Eric reminded me that I had forgotten to send this email to kata-dev.

I did a quick experiment that I believe confirms that the cgroup accounting for shared memory, as mapped by virtiofsd and qemu, is correct, i.e. that if the same page is mapped by two processes in the cgroup, it is only reported once. Specifically, the experiment was:

1) I started a container in a pod
2) I measured the output of ps and of the cgroup's memory.stat before doing any I/O
3) I did some basic I/O (using dnf install)
4) I measured the increase in rss as reported by "ps" for both qemu and virtiofsd (roughly 140M for qemu and 93M for virtiofsd in that case)
5) I checked that the increase reported in memory.stat:
   a) showed only a very minor rss increase
   b) accounted for the larger of the two increases, and not the sum of the two

What I find weird is that the notion of "rss" in ps/top and in the cgroup would be so different. There is also a "share" field in ps, but it displays as "-", and a vsize field, but that is the total virtual memory size, so not very useful here.

Details of the experiment below.

Starting with a simple ubi8 pod and container, I see qemu using ~320M (319924K) and the virtiofsd processes using less than 10M each (5604K for the parent, 8308K for the worker):

[root@worker-0-0 core]# ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd"
1207264   5604 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
1207270 319924 /usr/libexec/qemu-kiwi -name sandbox-6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3 -uuid de74cc11-f3b3-4ec5-b5f6-ec5f0bb4077b -machine q35,accel=kvm,kernel_irqchip -cpu host,pmu=off -qmp unix:/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=17029M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile=,max_ports=2 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/console.sock,server,nowait -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0,romfile= -device vhost-vsock-pci,disable-modern=true,vhostfd=3,id=vsock-2835622479,guest-cid=2835622479,romfile= -chardev socket,id=char-a45ee029c15635f3,path=/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-a45ee029c15635f3,tag=kataShared,romfile= -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=0a:58:0a:82:03:ab,disable-modern=true,mq=on,vectors=4,romfile= -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/vmlinuz -initrd /var/cache/kata-containers/osbuilder-images/4.18.0-240.15.1.el8_3.x86_64/"rhcos"-kata-4.18.0-240.15.1.el8_3.x86_64.initrd -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 cryptomgr.notests net.ifnames=0 pci=lastbus=0 quiet panic=1 nr_cpus=8
scsi_mod.scan=none -pidfile /run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/pid -smp 1,cores=1,threads=1,sockets=8,maxcpus=8
1207277   8308 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1

I can check that they are all placed in the same cgroup:

[root@worker-0-0 core]# cat /proc/1207270/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
[root@worker-0-0 core]# cat /proc/1207264/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
[root@worker-0-0 core]# cat /proc/1207277/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3

Looking at that cgroup's memory stats right after boot, I see:

rss 25522176 (24M)
rss_huge 12582912 (12M)
shmem 280879104 (267M)
mapped_file 280879104 (267M)

Notice that the cgroup's idea of rss is quite different from ps's (we have 36M instead of ~330M). Presumably, ps counts as resident memory that the cgroup accounts as shared or file-mapped.

Complete output in case there is something important I failed to share:

[root@worker-0-0 core]# cd /sys/fs/cgroup/memory/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice\:crio\:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/
[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# cat memory.stat
cache 281960448
rss 25522176
rss_huge 12582912
shmem 280879104
mapped_file 280879104
dirty 0
writeback 0
swap 0
pgpgin 110352
pgpgout 38323
pgfault 108834
pgmajfault 0
inactive_anon 280338432
active_anon 25870336
inactive_file 946176
active_file 0
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 281960448
total_rss 25522176
total_rss_huge 12582912
total_shmem 280879104
total_mapped_file 280879104
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 110352
total_pgpgout 38323
total_pgfault 108834
total_pgmajfault 0
total_inactive_anon 280338432
total_active_anon 25870336
total_inactive_file 946176
total_active_file 0
total_unevictable 0
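(Side note: I believe the discrepancy between the two "rss" numbers is simply that the cgroup's rss counter only covers anonymous memory, while the RSS that ps shows also includes file-backed and shmem-backed pages, in particular the /dev/shm guest RAM that qemu maps with share=on and that virtiofsd maps as well. If someone wants to double-check this reading, something along these lines should break the ps number down per process; this is an untested sketch, and the PIDs are the ones from this particular run:

for pid in 1207264 1207270 1207277; do
  echo "== $pid =="
  # VmRSS is what ps reports; RssAnon/RssFile/RssShmem are its components
  grep -E '^(VmRSS|RssAnon|RssFile|RssShmem):' /proc/$pid/status
done

If I am right, most of qemu's and virtiofsd's RSS should show up as RssShmem, which the cgroup accounts under shmem/mapped_file rather than rss.)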
Now I run `dnf install -y procps-ng` from within the guest.

The ps output now shows an rss that went from ~320M to ~460M for qemu alone, and from ~8M to ~101M for the worker virtiofsd:

[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd"
1207264   5604 /usr/libexec/virtiofsd
1207270 458516 /usr/libexec/qemu-kiwi
1207277 100888 /usr/libexec/virtiofsd

As seen by the cgroup, we now have a tiny increase in rss (it went from 24M to 25M), no change at all in rss_huge, but shmem and mapped_file went from 267M to 406M:

rss 26165248
rss_huge 12582912
shmem 426184704
mapped_file 426184704

Complete output just in case:

[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# cat memory.stat
cache 477405184
rss 26165248
rss_huge 12582912
shmem 426184704
mapped_file 426184704
dirty 15273984
writeback 0
swap 0
pgpgin 170544
pgpgout 50532
pgfault 163614
pgmajfault 0
inactive_anon 425779200
active_anon 26816512
inactive_file 18788352
active_file 32034816
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 477405184
total_rss 26165248
total_rss_huge 12582912
total_shmem 426184704
total_mapped_file 426184704
total_dirty 15273984
total_writeback 0
total_swap 0
total_pgpgin 170544
total_pgpgout 50532
total_pgfault 163614
total_pgmajfault 0
total_inactive_anon 425779200
total_active_anon 26816512
total_inactive_file 18788352
total_active_file 32034816
total_unevictable 0

Conclusion on this simple example:

- ps reports an increase in "rss" of 460M - 320M, i.e. roughly 140M, for qemu, plus another 93M for virtiofsd
- the cgroup reports a total increase of about 139M

So I would say that the cgroup accounting is correct, i.e. it is not double-counting the pages shared by qemu and virtiofsd. The next step is to check what metrics are actually reported to the upper layers of the stack.
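PS: for reference, the before/after deltas above can be pulled out mechanically with something along these lines rather than by eyeballing the two dumps; untested sketch, using the same cgroup path as in the experiment:

CG=/sys/fs/cgroup/memory/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
# snapshot the interesting counters before the I/O
grep -E '^(rss|rss_huge|shmem|mapped_file) ' "$CG/memory.stat" > /tmp/before
# ... run the workload in the guest (dnf install ...) ...
grep -E '^(rss|rss_huge|shmem|mapped_file) ' "$CG/memory.stat" > /tmp/after
# print the per-counter delta in bytes
paste /tmp/before /tmp/after | awk '{ printf "%-12s %+d\n", $1, $4 - $2 }'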