On 9 Apr 2021, at 17:38, Eric Ernst <eric.g.ernst@gmail.com> wrote:



On Fri, Apr 9, 2021 at 8:08 AM Christophe de Dinechin <cdupontd@redhat.com> wrote:


On 8 Apr 2021, at 02:07, Eric Ernst <eric.g.ernst@gmail.com> wrote:



On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
> I'd like to see if we can create a pattern in upstream Kata Containers to
> help facilitate directly assigning volumes to the VM via virtio-blk,
> skipping any mounts on the host, and avoiding needing to use a shared
> file-system for the particular volume. Some of the benefits in doing this:
>  - we can better isolate the host (no mounted filesystem),

It would be nice to make the mechanism extensible so other types of
volumes can be attached in the future.

It might be desirable to perform an NFS mount inside the sandbox VM
instead of on the host, for example. The downside is that the sandbox VM
needs access to the storage network, but the host kernel is no longer
involved.

Stefan,

Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
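
To make the question concrete, here is roughly what I'm picturing, with a volume type field being the obvious augmentation; every field name below is a placeholder to illustrate the shape, not a concrete proposal:

    // Illustrative sketch only: a DiskMountInfo-like structure extensible to
    // volume types beyond block devices. Field names are placeholders.
    package types

    type DiskMountInfo struct {
            VolumeType string            // e.g. "block", "nfs", "iscsi"
            Device     string            // block device path, or remote source for network volumes
            FsType     string            // e.g. "ext4", "xfs", "nfs"
            Options    []string          // mount options to apply in the guest
            Metadata   map[string]string // volume-type specific extras (credentials, portals, ...)
    }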

At the last architecture committee meeting, Peng Tao presented the work
they did on the CSI and CNI side to expose a port that the runtime can
talk to in order to get device information.
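
If I understood the idea correctly, the runtime would query that port along these lines; the endpoint, the port number and the JSON fields below are entirely made up, just to illustrate the flow:

    package main

    // Hypothetical sketch: the runtime asks a local port exposed by the
    // CSI/CNI plugins for device information about a given volume.
    // Endpoint, port and field names are invented for illustration.
    import (
            "encoding/json"
            "fmt"
            "net/http"
    )

    type deviceInfo struct {
            VolumeType string `json:"volume_type"` // e.g. "block", "nfs", "iscsi"
            Device     string `json:"device"`      // host device path or remote source
            FsType     string `json:"fs_type"`
    }

    func main() {
            // Placeholder address: whatever port the CSI/CNI side ends up exposing.
            resp, err := http.Get("http://127.0.0.1:7777/volumes/pvc-1234")
            if err != nil {
                    panic(err)
            }
            defer resp.Body.Close()

            var info deviceInfo
            if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
                    panic(err)
            }
            fmt.Printf("volume info from plugin: %+v\n", info)
    }

The point being that the runtime could learn everything it needs about the volume without mounting anything on the host.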


Ah, I did not see that on the AC agenda. Tao, can you share a link to that presentation?

Sorry, my bad, not the AC meeting, I think it was the use case meeting.

ETOOMANYMEETINGS

 
We will keep discussing it next week, but I think there are very
interesting options for network-attached storage if we can somehow
combine CSI + CNI (I'm not discounting the effort needed to translate
the network topology from host to guest in such cases, what Stefan
described as "access to the storage network").

Consider an NFS volume, or iSCSI, or something like that. Now,
let's imagine that we also have some "fast" network, e.g. a NIC directly
mapped into the guest, a VF, etc. Then it would make sense to do all
the networking from within the guest, e.g. for performance reasons.

But the interaction becomes quite complex in that case:
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume
  (I believe your DiskMountInfo has what we need, not 100% sure)
- CNI will have the information on where to access that network
  (and generally speaking, we know how to expose that in-guest)

For what you describe, would you then expect a second network interface to be added to the sandbox, or would you reuse the existing one? While this may mean extra work for the infrastructure operator, I'm not sure it would necessarily be tied to the direct-assigned part; it is more work if you intend to do direct assignment (i.e., chained CNI). AFAICT, we'd still want to communicate the volume information (where the source would now be a network location) from CSI to the runtime?

Still open in my mind:
- How are credentials handled for accessing the remote storage (i.e., mounted into the rootfs at a specific location that the agent is aware of, a la [1])? Or are there other suggestions?
- I assume that the remote URL:path can still be communicated via the Device field in the proposed DirectMount structure (perhaps Device should be renamed to Source).
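
For instance, for an NFS volume I'd picture something like the following; the struct, the field names and the credentials path are all guesses at how it could look rather than a final proposal:

    package main

    import (
            "encoding/json"
            "fmt"
    )

    // Illustrative only: "Device" renamed to "Source" so it can carry either a
    // block device path or a remote location such as server:/export.
    type DirectMount struct {
            Source  string   `json:"source"`  // e.g. "nfs.example.com:/export/vol0"
            FsType  string   `json:"fs_type"` // e.g. "nfs"
            Options []string `json:"options"`
            // Hypothetical: a well-known guest path where credentials get
            // bind-mounted so the agent knows where to find them (a la [1]).
            CredentialsPath string `json:"credentials_path,omitempty"`
    }

    func main() {
            m := DirectMount{
                    Source:          "nfs.example.com:/export/vol0",
                    FsType:          "nfs",
                    Options:         []string{"vers=4.1"},
                    CredentialsPath: "/run/kata-containers/credentials/vol0",
            }
            out, _ := json.MarshalIndent(m, "", "  ")
            fmt.Println(string(out))
    }

That JSON blob is roughly the shape of information I'd expect to flow from the CSI plugin to the runtime.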
 
Thanks,

[1] - https://github.com/kata-containers/kata-containers/blob/main/src/runtime/cli/config/configuration-clh.toml.in#L255-L259


 
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage
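
In runtime terms, I picture the decision going roughly like this; it is only a sketch with made-up helper names, not how the actual kata-runtime code is structured:

    package main

    import "fmt"

    // Sketch: network-backed volumes are handed to the agent to mount inside
    // the guest, while block devices keep the virtio-blk hotplug path.
    // Names and types are illustrative only.
    type volumeInfo struct {
            VolumeType string // "block", "nfs", "iscsi", ...
            Source     string
            FsType     string
    }

    func attach(v volumeInfo) error {
            switch v.VolumeType {
            case "block":
                    // hotplug the device into the VM via virtio-blk and let the
                    // agent mount it inside the guest
                    return hotplugBlockDevice(v)
            case "nfs", "iscsi":
                    // no host-side mount: ask the agent to mount it in-guest,
                    // assuming CNI gave the guest a route to the storage network
                    return agentMountInGuest(v)
            default:
                    return fmt.Errorf("unsupported volume type %q", v.VolumeType)
            }
    }

    // Placeholders standing in for the real plumbing.
    func hotplugBlockDevice(v volumeInfo) error { return nil }
    func agentMountInGuest(v volumeInfo) error  { return nil }

    func main() {
            _ = attach(volumeInfo{VolumeType: "nfs", Source: "nfs.example.com:/export/vol0", FsType: "nfs"})
    }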

I am not sure this is exactly what Stefan had in mind, but that is what
popped into my head when Peng Tao showed his slides.

That does not directly answer your question. I believe that as you
propose it, we'd have all the information we need, since we have
an FsType (where presumably we could have "nfs") and a volume
type (where we presumably could see "iscsi"). I have not considered
all the combinations, but at first sight, it looks sane as is ;-)
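
Concretely, the two combinations I have in mind would look something like this, again with made-up field names; note that for iSCSI the FsType describes the filesystem on the LUN while the volume type captures the transport:

    package main

    import "fmt"

    // Illustrative combinations only; the exact fields depend on the final proposal.
    type volumeInfo struct{ VolumeType, Source, FsType string }

    func main() {
            examples := []volumeInfo{
                    {VolumeType: "nfs", Source: "nfs.example.com:/export/vol0", FsType: "nfs"},
                    {VolumeType: "iscsi", Source: "iqn.2021-04.com.example:target0", FsType: "ext4"},
            }
            for _, v := range examples {
                    fmt.Printf("%+v\n", v)
            }
    }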


Thanks
Christophe