At the last arch committee meeting, Peng Tao presented the work they did
on the CSI and CNI side to expose a port that the runtime can talk to in
order to get device information.
We will keep discussing it next week, but I think there are very
interesting options for network-attached storage if we can somehow
make sure we can combine CSI + CNI (I'm not discounting the effort to
translate the network topology from host to guest for such cases, what
Stefan described as "access to the storage network").
Consider an NFS volume, or iSCSI, or something like that. Now,
let's imagine that we also have some "fast" network, e.g. a NIC directly
mapped into the guest, a VF, etc. Then it would make sense to do all
the networking from within the guest, e.g. for performance reasons.
But the interaction becomes quite complex in that case (see the sketch
after this list):
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume
(I believe your DiskMountInfo has what we need, though I'm not 100% sure)
- CNI will have the information on where to access that network
(and generally speaking, we know how to expose that in-guest)
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage
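To make that concrete, here is a rough Go sketch of what the last three
steps could look like inside the guest. Everything in it is made up for
illustration (the NetVolume type, the field names, the addresses); it is
not the agent's actual API, just the shape of the problem as I see it:

package main

import (
	"fmt"
	"os/exec"
)

// NetVolume is a hypothetical aggregation of what CSI gives us
// (filesystem type, remote source) and what CNI gives us (the route
// needed to reach the storage network from inside the guest).
type NetVolume struct {
	FsType     string // e.g. "nfs"
	Source     string // e.g. "10.0.2.5:/export/data"
	GuestPath  string // in-guest mount point
	StorageNet string // e.g. "10.0.2.0/24"
	GatewayIP  string // next hop inside the guest
	Device     string // guest-side NIC, e.g. the VF "eth1"
}

// mountInGuest would run inside the guest (e.g. in the agent):
// first make the storage network reachable, then do the mount.
func mountInGuest(v NetVolume) error {
	// Routing step: reach the storage network over the fast NIC.
	route := exec.Command("ip", "route", "add", v.StorageNet,
		"via", v.GatewayIP, "dev", v.Device)
	if out, err := route.CombinedOutput(); err != nil {
		return fmt.Errorf("route: %v: %s", err, out)
	}
	// Mount step: a plain in-guest NFS mount (iSCSI would need a
	// login step first, but the idea is the same).
	mnt := exec.Command("mount", "-t", v.FsType, v.Source, v.GuestPath)
	if out, err := mnt.CombinedOutput(); err != nil {
		return fmt.Errorf("mount: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := mountInGuest(NetVolume{
		FsType:     "nfs",
		Source:     "10.0.2.5:/export/data",
		GuestPath:  "/mnt/data",
		StorageNet: "10.0.2.0/24",
		GatewayIP:  "192.168.1.1",
		Device:     "eth1",
	})
	if err != nil {
		fmt.Println(err)
	}
}

The point being that the mount can only succeed once the routing step
has happened, which is exactly where CSI and CNI have to meet.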
I am not sure this is exactly what Stefan had in mind, but that is what
popped into my own brain when Peng Tao showed his slides.
That does not directly answer your question. I believe that as you
propose it, we'd have all the information we need, since we have
an FsType (where presumably we could have "nfs") and a volume
type (where we presumably could see "iscsi"). I have not considered
all the combinations, but at first sight, it looks sane as is ;-)
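For instance, the detection step could be as simple as a lookup on those
two fields. Again purely illustrative, I'm guessing at plausible values
rather than quoting the real DiskMountInfo:

package main

import "fmt"

// isNetworkVolume is a hypothetical check: treat the volume as
// network-attached if either the filesystem type or the volume type
// names a network protocol. The value lists are guesses, not an
// exhaustive or authoritative set.
func isNetworkVolume(fsType, volumeType string) bool {
	switch fsType {
	case "nfs", "nfs4", "cifs", "ceph":
		return true
	}
	switch volumeType {
	case "iscsi", "rbd":
		return true
	}
	return false
}

func main() {
	fmt.Println(isNetworkVolume("nfs", ""))   // true
	fmt.Println(isNetworkVolume("ext4", ""))  // false, local block device
	fmt.Println(isNetworkVolume("", "iscsi")) // true
}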
Thanks
Christophe