Thanks for the reply. I was just about to send out my findings.

Summary: By default, if you don't specify sockets and cores, QEMU comes up with a 1-core-per-socket topology. With sockets=1,cores=K (K equal to max_vcpus), they all come up in one socket and you can still hot-plug, as shown below. I tried the same with K=255 and it seemed to work, although I don't know whether testing like this without an actual VM is enough.

qemu-system-x86_64 -qmp unix:/tmp/qmp-sock,server,nowait -smp 1,maxcpus=4 -nographic

(QEMU) query-hotpluggable-cpus
{"return": [
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 3, "core-id": 0, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 2, "core-id": 0, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 1, "core-id": 0, "thread-id": 0}},
  {"qom-path": "/machine/unattached/device[0]", "type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 0, "thread-id": 0}}
]}

qemu-system-x86_64 -qmp unix:/tmp/qmp-sock,server,nowait -smp 1,cores=4,sockets=1,maxcpus=4 -nographic

(QEMU) query-hotpluggable-cpus
{"return": [
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 3, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 2, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 1, "thread-id": 0}},
  {"qom-path": "/machine/unattached/device[0]", "type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 0, "thread-id": 0}}
]}

(QEMU) device_add driver=qemu64-x86_64-cpu socket-id=0 core-id=2 thread-id=0
{"return": {}}

(QEMU) query-hotpluggable-cpus
{"return": [
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 3, "thread-id": 0}},
  {"qom-path": "/machine/peripheral-anon/device[0]", "type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 2, "thread-id": 0}},
  {"type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 1, "thread-id": 0}},
  {"qom-path": "/machine/unattached/device[0]", "type": "qemu64-x86_64-cpu", "vcpus-count": 1, "props": {"socket-id": 0, "core-id": 0, "thread-id": 0}}
]}

Thanks,
Sai

-----Original Message-----
From: Montes, Julio
Sent: Wednesday, June 20, 2018 11:37 AM
To: kata-dev@lists.katacontainers.io; Edupuganti, Saikrishna <saikrishna.edupuganti@intel.com>
Cc: Ernst, Eric <eric.ernst@intel.com>
Subject: Re: CPU constraint translation to Qemu SMP options

Hi Sai

On Wed, 2018-06-20 at 08:50 -0700, Edupuganti, Saikrishna wrote:
Hi Team,
When translating the K CPU count from Docker/k8s to Qemu options, what is the reasoning behind configuring K sockets with 1 core per socket instead of 1 socket with K cores? Did any performance measurements guide this decision?
No, this is to support the maximum number of vCPUs. From https://github.com/containers/virtcontainers/pull/591:

"The maximum number of CPUs per VM recommended by KVM is 240. To support this amount of CPUs, the CPU topology must change to 1 CPU 1 Socket, otherwise the VM will fail and print next error message: qemu: max_cpus is too large. APIC ID of last CPU is N"

See issue https://github.com/containers/virtcontainers/issues/597.

Do you see better PnP numbers by changing the topology (threads vs. cores vs. sockets)?
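For reference, the two topologies under discussion look roughly like this on the qemu command line for a 240-vCPU guest. These are illustrative invocations only, not the exact arguments the runtime generates:

# current approach: 1 core per socket, one hot-pluggable socket per possible vCPU
qemu-system-x86_64 -smp 1,sockets=240,cores=1,threads=1,maxcpus=240 ...

# alternative being discussed: a single socket with 240 cores
qemu-system-x86_64 -smp 1,sockets=1,cores=240,threads=1,maxcpus=240 ...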
When we bring up a kata container with 8 CPUs, it brings up a VM with (8+1) sockets -
Read https://github.com/kata-containers/documentation/blob/master/constraints/cpu.md. Please don't hesitate to ask me if you don't understand the above document.
kubectl exec -it ubuntu-kata -- lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
kubectl exec -it ubuntu-kata-8cpu-limit -- lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                9
On-line CPU(s) list:   0-8
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             9
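For completeness, the 8-CPU pod above could be created with a CPU limit along the following lines; the pod name and image mirror the output above, but the exact flags, and how the kata runtime is selected, depend on the cluster setup, so treat this as a sketch:

kubectl run ubuntu-kata-8cpu-limit --image=ubuntu --restart=Never --limits=cpu=8 -- sleep infinity

The 9 CPUs / 9 sockets reported by lscpu are then the default boot vCPU plus the 8 hot-plugged vCPUs, each landing in its own socket under the current 1-core-per-socket topology.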
Thanks,
Sai