Impact of our current choice of PCI topology
With David's permission, let me forward a discussion regarding PCI topology in qemu. The discussion is about how to accelerate hot-plugging in qemu guests in order to reduce boot time, but it also produced an interesting summary of the impact of the choices Kata makes regarding PCI topology, which I thought was worth sharing with this list.
On 15 Feb 2021, at 07:09, David Gibson <dgibson@redhat.com> wrote:
On Mon, 8 Feb 2021 09:42:22 +0100
I'm curious if the device itself reacts faster than that. If so, could we consider making the delay a kernel command-line option? Maybe that's already done somewhere?
Hm, if we're willing to play fast and loose with the spec, maybe.
The assumption would be that this is only under virtualization, so we may have more control over the "operator". That may be enough to compensate for "playing fast and loose". Maybe.
Hm, true. My first inclination was to reject that idea, but thinking about it further, we do really control both "sides" of the protocol, so hacking in this way might be a reasonable compromise in the medium term (longer term, I think ACPI hotplug is probably the answer).
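As an illustrative aside, QEMU later grew a machine property that switches Q35 guests from PCIe native hotplug to ACPI-based hotplug. The fragment below is only a sketch of what "ACPI hotplug as the answer" could look like on the command line; the ICH9-LPC property is an assumption about the QEMU version in use and is not something Kata sets today, so check your build before relying on it:

    # Sketch only: ask a q35 machine to use ACPI-based PCI hotplug
    # (verify that this ICH9-LPC property exists in your QEMU build).
    qemu-system-x86_64 -machine q35 \
        -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=on \
        ...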
[...]
Well, presently kata does something like this:
-device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2,romfile= \
-device virtio-serial-pci,disable-modern=false,id=serial0,romfile=,max_ports=2 \
-device virtconsole,chardev=charconsole0,id=console0 \
-device virtio-scsi-pci,id=scsi0,disable-modern=false,romfile= \
-device virtio-rng-pci,rng=rng0,romfile= \
-device vhost-vsock-pci,disable-modern=false,vhostfd=3,id=vsock-908331243,guest-cid=908331243,romfile= \
-device vhost-user-fs-pci,chardev=char-9ebd3233988c581e,tag=kataShared,romfile= \
-device driver=virtio-net-pci,netdev=network-0,mac=9a:be:ea:cc:8f:8f,disable-modern=false,mq=on,vectors=4,romfile=
whereas libvirt uses a root port:
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:cd:ed:dd,bus=pci.1,addr=0x0 \
-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0
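To make the difference concrete, here is a rough sketch, in the style of lspci -t, of the two shapes these topologies produce in the guest (bus numbers and device names are invented for illustration, not taken from the command lines above). Devices plugged under the pci-bridge all share one secondary bus, whereas each pcie-root-port carries exactly one device on its own bus:

    # pci-bridge shape: devices plugged under the bridge share bus 01
    -[0000:00]-+-02.0-[01]--+-01.0  virtio-blk (hot-plugged)
               |            \-02.0  virtio-net (hot-plugged)
               \-03.0  virtio-scsi (cold-plugged on the root bus)

    # pcie-root-port shape: one port, one secondary bus, one device
    -[0000:00]-+-02.0-[01]----00.0  virtio-net
               +-02.1-[02]----00.0  virtio-blk
               \-02.2-[03]----00.0  virtio-rng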
I remember someone mentioning that the kata approach was outdated and introduced some limitations with respect to hotplugging, but I can't recall the details. What are the implications of using pci-bridge with respect to hotplug?
There are several.
1. Using a bridge means we are using SHPC rather than PCI-e native hotplug, which has the tradeoffs I discussed earlier.
2. On the other hand, using one bridge gives us 32 slots we can potentially plug into, whereas each port just gives us one. This makes managing the number of available hotplug slots much easier, and is the main reason that Kata kind of prefers to use the bridge at the moment.
3. Not strictly related to hotplug, but using a bridge does mean that all the devices under it will be in the same *guest* IOMMU group, even if they're in different host IOMMU groups (see the sketch below for a way to check this from inside the guest). That has several implications:
   - It wouldn't be possible to hand in several VFIO devices and give them each to *different* userspace drivers in the guest (e.g. two separate DPDK applications running as different processes). That's probably not all that likely in practice, since putting separate DPDK apps in separate containers seems a more likely choice.
   - It also means we can't use some of the devices under the bridge with DPDK while others are used by guest kernel drivers. Since certain storage options involve hotplugging PCI devices into the guest which are then bound to guest drivers, this would rule out using DPDK with those storage configurations.
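A quick way to observe that grouping from inside the guest is a generic sysfs check (nothing Kata-specific; it only shows anything if a virtual IOMMU is present): list the devices the guest kernel has placed in each IOMMU group. With the bridge topology you would expect the bridge and everything behind it to land in a single group:

    # List every PCI device by guest IOMMU group; the directory is empty
    # if no (virtual) IOMMU is enabled in the guest.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        ls "$g"/devices
    done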