On 14 Apr 2021, at 20:30, Eric Ernst <eric.g.ernst@gmail.com> wrote:

>> Now the next step is to check what metrics are actually reported to the upper layers of the stack.
>
> Generally we'd be interested in the metrics that are used for calculating OOM and eviction. For both (at least for eviction handling in the kubelet), it would be the memory cgroup's usage stats via `memory.usage_in_bytes`.

Yes. And here is where, at the moment, I have some relatively concerning news, although I am not really done with my drill-down yet. I have observed cases where Kata seems to only ever access a little less than 50% of the total available memory on the worker nodes. I just filed https://github.com/kata-containers/kata-containers/issues/1695 to describe the problem.

> --Eric
>
> On Wed, Apr 14, 2021 at 7:50 AM Christophe de Dinechin <cdupontd@redhat.com> wrote:

Eric reminded me that I had forgotten to send this email to kata-dev.

I did a quick experiment that I believe confirms that the cgroup accounting for shared memory, as mapped by virtiofsd and qemu, is correct, i.e. that if the same page is mapped by two processes in the cgroup, it is only reported once.

Specifically, I did an experiment where (a condensed script for these steps is sketched below):

1) I started a container in a pod
2) I measured the output of ps and of the cgroup's memory.stat before doing any I/O
3) I did some basic I/O (using dnf install)
4) I measured the increase in rss as reported by "ps" for both qemu and virtiofsd (roughly 140M for qemu and 93M for virtiofsd in this case)
5) I checked that the resulting increase in memory.stat:
   a) showed only a very minor rss increase
   b) accounted for the larger of the two increases, not the sum of the two
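
For reference, here is a condensed sketch of those steps as a small script. The path is abbreviated (the full cgroup path for this sandbox appears further down) and the stat.before/stat.after file names are just made up for the example:

CG=/sys/fs/cgroup/memory/vc/kata_<pod-slice>:crio:<sandbox-id>   # full path shown below
ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd" > ps.before
cat $CG/memory.stat > stat.before
# ... run the I/O inside the guest (dnf install) ...
ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd" > ps.after
cat $CG/memory.stat > stat.after
diff <(sort stat.before) <(sort stat.after)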

What I find weird is that the notion of "rss" in ps/top and in the cgroup stats would be so different. ps also has a "share" field, which only displays as "-", and a "vsize" field, which is the total virtual memory size and therefore not very informative here.
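
One way to see where the ps-style rss actually goes (a sketch; it assumes the RssAnon/RssFile/RssShmem breakdown in /proc/<pid>/status, which appeared around kernel 4.5 and should be present here, and <qemu-pid> stands for the qemu PID shown below):

grep -E 'VmRSS|RssAnon|RssFile|RssShmem' /proc/<qemu-pid>/status

If most of VmRSS turns out to be RssShmem/RssFile, that would be consistent with the cgroup counting the same pages under shmem/mapped_file rather than rss.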

Details of the experiment below.

Starting with a simple ubi8 pod and container, I see qemu using ~320M of rss (319924 KiB as reported by ps) and virtiofsd using less than 10M for the worker process, plus ~5.5M for its parent (8308 + 5604 KiB):

[root@worker-0-0 core]# ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd"
1207264 5604 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
1207270 319924 /usr/libexec/qemu-kiwi -name sandbox-6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3 -uuid de74cc11-f3b3-4ec5-b5f6-ec5f0bb4077b -machine q35,accel=kvm,kernel_irqchip -cpu host,pmu=off -qmp unix:/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/qmp.sock,server,nowait -m 2048M,slots=10,maxmem=17029M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= -device virtio-serial-pci,disable-modern=true,id=serial0,romfile=,max_ports=2 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/console.sock,server,nowait -device virtio-scsi-pci,id=scsi0,disable-modern=true,romfile= -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0,romfile= -device vhost-vsock-pci,disable-modern=true,vhostfd=3,id=vsock-2835622479,guest-cid=2835622479,romfile= -chardev socket,id=char-a45ee029c15635f3,path=/run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-a45ee029c15635f3,tag=kataShared,romfile= -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=0a:58:0a:82:03:ab,disable-modern=true,mq=on,vectors=4,romfile= -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/vmlinuz -initrd /var/cache/kata-containers/osbuilder-images/4.18.0-240.15.1.el8_3.x86_64/"rhcos"-kata-4.18.0-240.15.1.el8_3.x86_64.initrd -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 cryptomgr.notests net.ifnames=0 pci=lastbus=0 quiet panic=1 nr_cpus=8 scsi_mod.scan=none -pidfile /run/vc/vm/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/pid -smp 1,cores=1,threads=1,sockets=8,maxcpus=8
1207277 8308 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/shared -o cache=auto --syslog -o no_posix_lock -f --thread-pool-size=1
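
Note the -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on on the qemu command line: the guest RAM is a shared, file-backed mapping, and virtiofsd maps that same guest memory through vhost-user. If I read the setup correctly, a quick way to confirm that both processes map the same backing object is to look at their maps (sketch):

grep dev/shm /proc/1207270/maps | head -1    # qemu's mapping of the dimm1 backing file
grep dev/shm /proc/1207277/maps | head -1    # virtiofsd's mapping of the shared guest RAM

That shared mapping is what the cgroup should later report under shmem/mapped_file rather than rss.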

I can check that they are put in the same cgroup:

[root@worker-0-0 core]# cat /proc/1207270/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
[root@worker-0-0 core]# cat /proc/1207264/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
[root@worker-0-0 core]# cat /proc/1207277/cgroup | grep memory
4:memory:/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3
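
The same thing can be checked from the cgroup side (a sketch, reusing the $CG shorthand from the script above; the full path is shown below):

cat $CG/cgroup.procs    # should list 1207264, 1207270 and 1207277 (and possibly other sandbox processes)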

Looking into that cgroup's memory stats right after boot, I see:
rss 25522176 (24M)
rss_huge 12582912 (12M)
shmem 280879104 (267M)
mapped_file 280879104 (267M)

Notice that the cgroup's idea of rss is quite different from ps's (roughly 36M here instead of ~330M). Presumably, ps counts as resident some memory that the cgroup accounts for as shared or file-mapped.
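
A possible cross-check (just a sketch; it assumes /proc/<pid>/smaps_rollup is available, which it should be on a 4.18 kernel): Pss splits shared pages between the processes that map them, so summing Pss over qemu and the two virtiofsd processes should land much closer to the cgroup's cache + rss than to the sum of the ps rss values.

for pid in 1207264 1207270 1207277; do
    awk '/^Pss:/ { print FILENAME, $2, "kB" }' /proc/$pid/smaps_rollup
done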

Complete output in case there is something important I failed to share:

[root@worker-0-0 core]# cd /sys/fs/cgroup/memory/vc/kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice\:crio\:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3/
[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# cat memory.stat
cache 281960448
rss 25522176
rss_huge 12582912
shmem 280879104
mapped_file 280879104
dirty 0
writeback 0
swap 0
pgpgin 110352
pgpgout 38323
pgfault 108834
pgmajfault 0
inactive_anon 280338432
active_anon 25870336
inactive_file 946176
active_file 0
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 281960448
total_rss 25522176
total_rss_huge 12582912
total_shmem 280879104
total_mapped_file 280879104
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 110352
total_pgpgout 38323
total_pgfault 108834
total_pgmajfault 0
total_inactive_anon 280338432
total_active_anon 25870336
total_inactive_file 946176
total_active_file 0
total_unevictable 0

Now I run `dnf install -y procps-ng` from within the guest. After that, ps shows an rss that went from ~320M to ~460M for qemu alone, and from ~8M to ~101M for the worker virtiofsd:

[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# ps -e -o pid,rss,cmd | grep "qemu\|virtiofsd"
1207264 5604 /usr/libexec/virtiofsd
1207270 458516 /usr/libexec/qemu-kiwi
1207277 100888 /usr/libexec/virtiofsd

As seen by the cgroup, we now have a tiny increase in rss (it went from 24M to 25M), no change at all in rss_huge, but shmem and mapped_file went from 267M to 406M:
rss 26165248
rss_huge 12582912
shmem 426184704
mapped_file 426184704
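
The per-counter deltas can also be computed from the before/after copies suggested in the sketch at the top (stat.before/stat.after being the hypothetical file names used there):

join <(sort stat.before) <(sort stat.after) |
    awk '{ d = $3 - $2; if (d != 0) printf "%-22s %+d (%.1fM)\n", $1, d, d / 1048576 }'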

Complete output just in case:

[root@worker-0-0 kata_kubepods-besteffort-podfd24e8a3_c942_47fd_abdc_2845368484fa.slice:crio:6b6b51b0e11f4736aa88698f30f9570b0f56b3c7a1620fedb15c6596d08de7e3]# cat memory.stat
cache 477405184
rss 26165248
rss_huge 12582912
shmem 426184704
mapped_file 426184704
dirty 15273984
writeback 0
swap 0
pgpgin 170544
pgpgout 50532
pgfault 163614
pgmajfault 0
inactive_anon 425779200
active_anon 26816512
inactive_file 18788352
active_file 32034816
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 477405184
total_rss 26165248
total_rss_huge 12582912
total_shmem 426184704
total_mapped_file 426184704
total_dirty 15273984
total_writeback 0
total_swap 0
total_pgpgin 170544
total_pgpgout 50532
total_pgfault 163614
total_pgmajfault 0
total_inactive_anon 425779200
total_active_anon 26816512
total_inactive_file 18788352
total_active_file 32034816
total_unevictable 0

Conclusion on this simple example:

- ps reports an increase in "rss" of 460M - 320M, or roughly 140M, for qemu, and another 93M for the worker virtiofsd
- the cgroup reports a total increase of about 139M (see the quick arithmetic below)
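
For the record, the arithmetic behind those two figures (ps reports rss in KiB, memory.stat in bytes):

echo $(( 458516 - 319924 ))        # qemu ps rss delta in KiB      -> 138592, i.e. the "roughly 140M"
echo $(( 100888 - 8308 ))          # virtiofsd ps rss delta in KiB -> 92580,  i.e. the ~93M
echo $(( 426184704 - 280879104 ))  # cgroup shmem/mapped_file delta in bytes -> 145305600, i.e. the ~139M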

So I would say that the cgroup accounting is correct, i.e. that it is not double-counting.

Now the next step is to check what metrics are actually reported to the upper layers of the stack.
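
For the eviction question specifically, the number the upper layers end up consuming is the cgroup usage counter rather than per-process rss, so a first check could be the following (a sketch; my understanding is that memory.usage_in_bytes roughly tracks cache + rss + swap from memory.stat, so it should follow the shmem/mapped_file growth above rather than the ps rss numbers):

cat $CG/memory.usage_in_bytes
cat $CG/memory.max_usage_in_bytes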

_______________________________________________
kata-dev mailing list
kata-dev@lists.katacontainers.io
http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev