RFC: direct-assigned filesystem volume proposal
Y'all,

I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host and avoiding the need to use a shared filesystem for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
- a more efficient and much faster data path than non-DAX virtiofs,
- avoiding the inode caching/memory overheads of virtiofs on the host.

In Kubernetes today, there is no direct communication channel between CSI and the runtime. To make this work on a shorter timeline, therefore, we need to come up with a less traditional way to communicate between the user, CSI and the runtime. I have a proposal in place that "works" today, and I'd like to get feedback on how to improve it, and to see whether it could be a good fit for upstream. Please see the proposal at [1], and the PR associated with that proposal at [2].

Please let me know if you:
- have any suggestions on the pattern
- are a user who would find this pattern helpful for your use cases

Thanks,
Eric

[1] - https://github.com/egernst/kata-containers/blob/da-proposal/docs/design/dire...
[2] - https://github.com/kata-containers/kata-containers/pull/1568
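For concreteness, here is a minimal sketch of the kind of information such a volume-info file could carry, expressed as a Go structure with JSON tags. The field names (Device, a volume type, FsType, Options) are taken from the discussion later in this thread; the actual schema lives in the proposal at [1], so treat everything below as illustrative rather than authoritative.

    // Sketch only: the authoritative schema is in the proposal at [1].
    // Field names follow the DiskMountInfo/DirectMount structure discussed
    // in this thread; the JSON tag names are assumptions.
    package directvolume

    type DiskMountInfo struct {
        // Device is what gets attached to the sandbox, e.g. a host block
        // device path for the virtio-blk case.
        Device string `json:"device"`
        // VolumeType says how to attach it, e.g. "block" (or, speculatively,
        // "iscsi" for network-attached cases).
        VolumeType string `json:"volume_type"`
        // FsType is the filesystem to mount inside the guest, e.g. "ext4".
        FsType string `json:"fs_type"`
        // Options are mount(1)-style options applied in the guest.
        Options []string `json:"options"`
    }

A CSI driver would serialize one of these as the csiPlugin.json file described in the proposal, so that the runtime can attach the device directly instead of bind-mounting a host filesystem into the shared directory.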
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this: - we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future. It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved. Stefan
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,

Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?

Eric
Stefan
On 8 Apr 2021, at 02:07, Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this: - we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
At the last arch committee meeting, Peng Tao presented the work they did on the CSI and CNI side to expose a port that the runtime can talk to to get device information. We will keep discussing it next week, but I think there are very interesting options for network-attached storage if we can somehow make sure we can combine CSI + CNI (I'm not discounting the effort to translate the network topology from host to guest for such cases, what Stefan described as "access to the storage network").

Consider an NFS volume, or iSCSI, or something like that. Now, let's imagine that we also have some "fast" network, e.g. a NIC directly mapped in the guest, a VF, etc. Then it would make sense to do all the networking from within the guest, e.g. for performance reasons. But the interaction becomes quite complex in that case:
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume (I believe your DiskMountInfo has what we need, not 100% sure)
- CNI will have the information on where to access that network (and generally speaking, we know how to expose that in-guest)
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage

I am not sure this is exactly what Stefan had in mind, but that is what popped into my brain when Peng Tao showed his slides.

That does not directly answer your question. I believe that as you propose it, we'd have all the information we need, since we have an FsType (where presumably we could have "nfs") and a volume type (where we presumably could see "iscsi"). I have not considered all the combinations, but at first sight, it looks sane as is ;-)

Thanks,
Christophe
On Fri, Apr 9, 2021 at 8:08 AM Christophe de Dinechin <cdupontd@redhat.com> wrote:
On 8 Apr 2021, at 02:07, Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
At the last arch committee meeting, Peng Tao presented the work they did on the CSI and CNI side to expose a port that the runtime can talk to to get device information.
Ah, I did not see that on the AC agenda. Tao, can you share a link to that presentation?
We will keep discussing it next week, but I think there are very interesting options for network-attached storage if we can somehow make sure we can combine CSI + CNI (I'm not discounting the effort to translate the network topology from host to guest for such cases, what Stefan described as "access to the storage network").
Consider an NFS volume, or iSCSI, or something like that. Now, let's imagine that we also have some "fast" network e.g. a NIC directly mapped in the guest, a VF, etc. Then it would make sense to do all the networking from within the guest e.g. for performance reasons.
But the interaction becomes quite complex in that case:
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume (I believe your DiskMountInfo has what we need, not 100% sure)
- CNI will have the information on where to access that network (and generally speaking, we know how to expose that in-guest)
For what you describe, would you then expect a second network interface being added to the sandbox, or are you reusing the existing one? While this may be extra work for the infra operator, I'm not sure this would necessarily be tied to the direct-assigned part; it's more work if you are intending to do direct assignment (i.e., chained CNI). AFAICT, we'd still want to communicate the volume information (where the source would now be a network location) from CSI to the runtime?

Still open in my mind:
- how are credentials handled for being able to access the remote storage (i.e., mounted into the rootfs at a specific location that the agent is aware of, à la [1])? Or, other suggestions?
- I assume that the remote URL:path can still be communicated via the Device field in the proposed DirectMount structure (perhaps Device should be renamed to Source).

Thanks,

[1] - https://github.com/kata-containers/kata-containers/blob/main/src/runtime/cli/config/configuration-clh.toml.in#L255-L259
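To make the open question concrete, a hypothetical entry for Christophe's NFS case might look like this, reusing the illustrative DiskMountInfo sketch from earlier in the thread (server address, export path, and options are all made up):

    // Hypothetical: Device carrying a remote URL:path, which is exactly why
    // renaming it to Source might read better.
    info := DiskMountInfo{
        Device:     "nfs.example.com:/exports/vol1",
        VolumeType: "nfs",
        FsType:     "nfs",
        Options:    []string{"vers=4.1", "rw"},
    }

The same shape would presumably cover iSCSI with VolumeType "iscsi" and an appropriate Device string; nothing here answers the credentials question.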
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage
I am not sure this is exactly what Stefan had in mind, but that is what popped into my brain when Peng Tao showed his slides.
That does not directly answer your question. I believe that as you propose it, we'd have all the information we need, since we have an FsType (where presumably we could have "nfs") and a volume type (where we presumably could see "iscsi"). I have not considered all the combinations, but at first sight, it looks sane as is ;-)
Thanks,
Christophe
On Fri, Apr 9, 2021 at 8:38 AM Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Fri, Apr 9, 2021 at 8:08 AM Christophe de Dinechin <cdupontd@redhat.com> wrote:
On 8 Apr 2021, at 02:07, Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
At the last arch committee meeting, Peng Tao presented the work they did on the CSI and CNI side to expose a port that the runtime can talk to to get device information.
Ah, I did not see that on the AC agenda. Tao, can you share a link to that presentation?
We will keep discussing it next week, but I think there are very interesting options for network-attached storage if we can somehow make sure we can combine CSI + CNI (I'm not discounting the effort to translate the network topology from host to guest for such cases, what Stefan described as "access to the storage network").
Consider an NFS volume, or iSCSI, or something like that. Now, let's imagine that we also have some "fast" network e.g. a NIC directly mapped in the guest, a VF, etc. Then it would make sense to do all the networking from within the guest e.g. for performance reasons.
But the interaction becomes quite complex in that case:
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume (I believe your DiskMountInfo has what we need, not 100% sure)
- CNI will have the information on where to access that network (and generally speaking, we know how to expose that in-guest)
For what you describe, would you then expect a second network interface being added to the sandbox, or are you reusing the existing one? While this may be extra work for the infra operator, I'm not sure this would necessarily be tied to the direct-assigned part; it's more work if you are intending to do direct assignment (i.e., chained CNI). AFAICT, we'd still want to communicate the volume information (where the source would now be a network location) from CSI to the runtime?
Still open in my mind:
- how are credentials handled for being able to access the remote storage (i.e., mounted into the rootfs at a specific location that the agent is aware of, à la [1])? Or, other suggestions?
- I assume that the remote URL:path can still be communicated via the Device field in the proposed DirectMount structure (perhaps Device should be renamed to Source).
Thanks,
[1] - https://github.com/kata-containers/kata-containers/blob/main/src/runtime/cli/config/configuration-clh.toml.in#L255-L259
Also, a reminder that we can use the GitHub issue to discuss some of this, for easier tracking: https://github.com/kata-containers/kata-containers/pull/1568#issuecomment-81...

--Eric
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage
I am not sure this is exactly what Stefan had in mind, but that is what popped into my brain when Peng Tao showed his slides.
That does not directly answer your question. I believe that as you propose it, we'd have all the information we need, since we have an FsType (where presumably we could have "nfs") and a volume type (where we presumably could see "iscsi"). I have not considered all the combinations, but at first sight, it looks sane as is ;-)
Thanks,
Christophe
On 9 Apr 2021, at 17:38, Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Fri, Apr 9, 2021 at 8:08 AM Christophe de Dinechin <cdupontd@redhat.com> wrote:
On 8 Apr 2021, at 02:07, Eric Ernst <eric.g.ernst@gmail.com> wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this: - we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
At the last arch committee meeting, Peng Tao presented the work they did on the CSI and CNI side to expose a port that the runtime can talk to to get device information.
Ah, I did not see that on the AC agenda. Tao, can you share a link to that presentation?
Sorry, my bad, not the AC meeting, I think it was the use case meeting. ETOOMANYMEETINGS
We will keep discussing it next week, but I think there are very interesting options for network-attached storage if we can somehow make sure we can combine CSI + CNI (I'm not discounting the effort to translate the network topology from host to guest for such cases, what Stefan described as "access to the storage network").
Consider an NFS volume, or iSCSI, or something like that. Now, let's imagine that we also have some "fast" network e.g. a NIC directly mapped in the guest, a VF, etc. Then it would make sense to do all the networking from within the guest e.g. for performance reasons.
But the interaction becomes quite complex in that case:
- CSI has the original storage definition
- The runtime needs to detect that this is a network volume (I believe your DiskMountInfo has what we need, not 100% sure)
- CNI will have the information on where to access that network (and generally speaking, we know how to expose that in-guest)
For what you describe, would you then expect a second network interface being added to the sandbox, or are you reusing the existing one? While this may be extra work for the infra operator, I'm not sure this would necessarily be tied to the direct-assigned part; it's more work if you are intending to do direct assignment (i.e., chained CNI). AFAICT, we'd still want to communicate the volume information (where the source would now be a network location) from CSI to the runtime?
Still open in my mind:
- how are credentials handled for being able to access the remote storage (i.e., mounted into the rootfs at a specific location that the agent is aware of, à la [1])? Or, other suggestions?
- I assume that the remote URL:path can still be communicated via the Device field in the proposed DirectMount structure (perhaps Device should be renamed to Source).
Thanks,
[1] - https://github.com/kata-containers/kata-containers/blob/main/src/runtime/cli/config/configuration-clh.toml.in#L255-L259
- That turns into an in-guest mount, e.g. NFS
- … plus some routing to make sure we can reach that storage
I am not sure this is exactly what Stefan had in mind, but that is what popped into my brain when Peng Tao showed his slides.
That does not directly answer your question. I believe that as you propose it, we'd have all the information we need, since we have an FsType (where presumably we could have "nfs") and a volume type (where we presumably could see "iscsi"). I have not considered all the combinations, but at first sight, it looks sane as is ;-)
Thanks,
Christophe
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
DiskMountInfo supports common mount(1) parameters. This looks fine. I'm not familiar enough with various storage providers (e.g. GlusterFS) to say if anything is missing.

I think it's worth keeping non-block device use cases in mind from the start just to avoid implementing the feature in a way that limits it to attaching block devices.

BTW, one aspect of the draft that isn't clear to me:

    "the CSI driver will create a particular file, csiPlugin.json, at the root of the volume on the host"

Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?

Stefan
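To spell the concern out: if the runtime parsed a csiPlugin.json found inside the volume's own contents, rather than one written by the CSI driver to a path end users cannot reach, an attacker could craft something like the following (using the illustrative fields from the sketch earlier in this thread) to get a host device attached:

    // Hypothetical attacker-controlled file contents, shown as the Go
    // structure from the earlier sketch. If this were trusted, the runtime
    // might attach the host's own disk to the attacker's sandbox.
    malicious := DiskMountInfo{
        Device:     "/dev/sda", // a host disk the user should never see
        VolumeType: "block",
        FsType:     "ext4",
        Options:    []string{"rw"},
    }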
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
It would be nice to make the mechanism extensible so other types of volumes can be attached in the future.
It might be desirable to perform an NFS mount inside the sandbox VM instead of on the host, for example. The downside is that the sandbox VM needs access to the storage network, but the host kernel is no longer involved.
Stefan,
Sorry I missed this initial reply. Can you help identify how you think we should augment the DiskMountInfo structure to accommodate this?
DiskMountInfo supports common mount(1) parameters. This looks fine.
I'm not familiar enough with various storage providers (e.g. GlusterFS) to say if anything is missing.
I think it's worth keeping non-block device use cases in mind from the start just to avoid implementing the feature in a way that limits it to attaching block devices.

+1. As a starter, we can try to make sure the initial API (or JSON) works for NFS and CIFS, IMHO.
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.

That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.

Cheers,
Tao
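For illustration, the RPC-based alternative could be as small as a single call the runtime makes to a local endpoint served by the CSI driver instead of reading a file. This is only a sketch of the idea, with invented names; no such Kata API exists at the time of this thread:

    import "context"

    // Hypothetical interface a CSI node driver could expose to the runtime,
    // replacing the csiPlugin.json file drop. Names invented for illustration.
    type DirectVolumeService interface {
        // GetVolumeInfo returns the direct-assign information for the
        // volume staged at targetPath, or an error if targetPath is not a
        // direct-assigned volume.
        GetVolumeInfo(ctx context.Context, targetPath string) (*DiskMountInfo, error)
    }

Because the information then arrives over a channel only the CSI driver serves, rather than from the volume's own contents, the malicious-csiPlugin.json problem discussed above goes away.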
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.

My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).

Stefan
On 2021/4/14 16:37, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.
My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).
Ah, good point! Kata needs to differentiate between a PV containing a csiPlugin.json file and a CSI driver "hacked" host path. They are both host directories from Kata's point of view.

A possible method is to check if the host path is a mountpoint, and then:
1. never try to parse the csiPlugin.json file if it is a mountpoint, and
2. require CSI to ensure that the volume host directory is not a mountpoint (IOW, do not mount the volume to the host directory)

wdyt?

Cheers,
Tao
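A minimal sketch of the mountpoint check Tao describes: compare the device ID of the host path with that of its parent directory. The path here is hypothetical, and a production check would more likely parse /proc/self/mountinfo, since the st_dev comparison misses bind mounts from the same filesystem:

    package main

    import (
        "fmt"
        "path/filepath"

        "golang.org/x/sys/unix"
    )

    // isMountPoint reports whether path sits on a different device than its
    // parent directory, i.e. whether something is mounted there. Sketch only:
    // bind mounts of a directory from the same filesystem keep the same
    // st_dev and would not be detected.
    func isMountPoint(path string) (bool, error) {
        var st, parent unix.Stat_t
        if err := unix.Lstat(path, &st); err != nil {
            return false, err
        }
        if err := unix.Lstat(filepath.Dir(path), &parent); err != nil {
            return false, err
        }
        return st.Dev != parent.Dev, nil
    }

    func main() {
        hostPath := "/var/lib/kubelet/pods/uid/volumes/foo" // hypothetical volume host path
        mounted, err := isMountPoint(hostPath)
        if err != nil {
            panic(err)
        }
        // Per Tao's rule 1: if it's a mountpoint, never parse csiPlugin.json.
        fmt.Println("refuse to parse csiPlugin.json:", mounted)
    }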
Eric,

Would your proposal allow open source projects like Minio, which use S3 object storage, to directly access that storage from a Kata container without needing a host mount first? I've not used Minio and was just reading about it, but below is their CSI driver for direct-attached storage.

https://github.com/minio/direct-csi

Thanks,
Eric

-----Original Message-----
From: Peng Tao via kata-dev <kata-dev@lists.katacontainers.io>
Sent: Wednesday, April 14, 2021 4:42 AM
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: kata-dev <kata-dev@lists.katacontainers.io>
Subject: Re: [kata-dev] RFC: direct-assigned filesystem volume proposal

On 2021/4/14 16:37, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.
My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).
Ah, good point! Kata needs to differentiate between a PV containing a csiPlugin.json file and a CSI driver "hacked" host path. They are both host directories from Kata's point of view.

A possible method is to check if the host path is a mountpoint, and then:
1. never try to parse the csiPlugin.json file if it is a mountpoint, and
2. require CSI to ensure that the volume host directory is not a mountpoint (IOW, do not mount the volume to the host directory)

wdyt?

Cheers,
Tao
On Wed, Apr 14, 2021 at 07:41:32PM +0800, Peng Tao wrote:
On 2021/4/14 16:37, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.
My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).
Ah, good point! Kata needs to differentiate between a PV containing a csiPlugin.json file and a CSI driver "hacked" host path. They are both host directories from Kata's point of view.
A possible method is to check if the host path is a mountpoint, and then:
1. never try to parse the csiPlugin.json file if it is a mountpoint, and
2. require CSI to ensure that the volume host directory is not a mountpoint (IOW, do not mount the volume to the host directory)
wdyt?
It would be nice if there was an explicit way for kata-runtime to know whether it's looking at a host directory set up by a CSI plugin or an actual mounted PV.

The mount point trick sounds okay but I worry that maybe in some environments the directory might be a mount point and that would result in a false positive.

I don't have a specific suggestion though because I haven't looked at this in detail.

Stefan
On 2021/4/26 23:51, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 07:41:32PM +0800, Peng Tao wrote:
On 2021/4/14 16:37, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.
My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).
Ah, good point! Kata needs to differentiate between a PV containing a csiPlugin.json file and a CSI driver "hacked" host path. They are both host directories from Kata's point of view.
A possible method is to check if the host path is a mountpoint, and then:
1. never try to parse the csiPlugin.json file if it is a mountpoint, and
2. require CSI to ensure that the volume host directory is not a mountpoint (IOW, do not mount the volume to the host directory)
wdyt?
It would be nice if there was an explicit way for kata-runtime to know whether it's looking at a host directory set up by a CSI plugin or an actual mounted PV.
Hmm, an API-based approach is a good candidate for such an explicit way.

Cheers,
Tao
The mount point trick sounds okay but I worry that maybe in some environments the directory might be a mount point and that would result in a false positive.
I don't have a specific suggestion though because I haven't looked at this in detail.
Stefan
On 27 Apr 2021, at 13:34, Peng Tao via kata-dev <kata-dev@lists.katacontainers.io> wrote:
On 2021/4/26 23:51, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 07:41:32PM +0800, Peng Tao wrote:
On 2021/4/14 16:37, Stefan Hajnoczi wrote:
On Wed, Apr 14, 2021 at 11:41:41AM +0800, Peng Tao wrote:
On 2021/4/12 22:46, Stefan Hajnoczi wrote:
On Wed, Apr 07, 2021 at 05:07:05PM -0700, Eric Ernst wrote:
On Thu, Apr 1, 2021 at 7:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
On Mon, Mar 29, 2021 at 04:49:31PM -0700, Eric Ernst wrote:
BTW, one aspect of the draft that isn't clear to me:
the CSI driver will create a particular file, csiPlugin.json at the root of the volume on the host
Can a malicious user put a csiPlugin.json file onto a persistent volume and then attach it to a Kata-enabled container to get the runtime and/or agent to execute mount commands either on the host or in the sandbox VM?
It is possible, but it violates the current Kata threat model, which is basically that we have to trust the host. If a user is able to gain root privilege on the host, he/she is capable of doing anything to Kata.
That said, it is indeed possible to mitigate this by using an RPC-based approach between CSI and Kata.
If the host is already compromised then I don't expect Kata to protect anything.
My question was about k8s persistent volumes. I wanted to check that the csiPlugin.json file is not interpreted if present on a persistent volume. It should only be interpreted when the CSI driver places it there on the host. The text wasn't completely clear on whether the "root of the volume on the host" refers to the contents of the persistent volume itself (that's unsafe) or to the container runtime's host path (that's safe).
Ah, good point! Kata needs to differentiate between a PV containing a csiPlugin.json file and a CSI driver "hacked" host path. They are both host directories from Kata's point of view.
A possible method is to check if the host path is a mountpoint, and then:
1. never try to parse the csiPlugin.json file if it is a mountpoint, and
2. require CSI to ensure that the volume host directory is not a mountpoint (IOW, do not mount the volume to the host directory)
wdyt?
It would be nice if there was an explicit way for kata-runtime to know whether it's looking at a host directory set up by a CSI plugin or an actual mounted PV.

Hmm, an API-based approach is a good candidate for such an explicit way.
Isn't that what your "open a port to communicate with CSI" approach did?
Cheers, Tao
The mount point trick sounds okay but I worry that maybe in some environments the directory might be a mount point and that would result in a false positive. I don't have a specific suggestion though because I haven't looked at this in detail.
Stefan
On Tue, Mar 30, 2021 at 2:50 AM Eric Ernst <eric.g.ernst@gmail.com> wrote:
Y'all,
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
- a more efficient and much faster data path than non-DAX virtiofs,
- avoiding the inode caching/memory overheads of virtiofs on the host.
In Kubernetes today, there is not a direct communication channel between CSI and the runtime. To make this work on a shorter timeline, therefore, we need to come up with a less traditional way to communicate between the user, CSI and the runtime. I have a proposal in place that "works" today, and I'd like to get feedback on how to improve it, and see if it could be a good fit for upstream. Please see: the proposal at [1], and the PR associated with that proposal at [2].
I'm not an expert here, but why not use raw block rather than adding the skip-hostmount annotation?
https://kubernetes.io/blog/2019/03/07/raw-block-volume-support-to-beta/

+Orit Wasserman <owasserm@redhat.com>

Thanks,
Amnon
Please let me know if you:
- have any suggestions on the pattern
- are a user who would find this pattern helpful for your use cases
Thanks, Eric
[1] - https://github.com/egernst/kata-containers/blob/da-proposal/docs/design/dire...
[2] - https://github.com/kata-containers/kata-containers/pull/1568
On Tue, Apr 6, 2021 at 1:04 AM Amnon Ilan <ailan@redhat.com> wrote:
On Tue, Mar 30, 2021 at 2:50 AM Eric Ernst <eric.g.ernst@gmail.com> wrote:
Y'all,
I'd like to see if we can create a pattern in upstream Kata Containers to help facilitate directly assigning volumes to the VM via virtio-blk, skipping any mounts on the host, and avoiding needing to use a shared file-system for the particular volume. Some of the benefits in doing this:
- we can better isolate the host (no mounted filesystem),
- a more efficient and much faster data path than non-DAX virtiofs,
- avoiding the inode caching/memory overheads of virtiofs on the host.
In Kubernetes today, there is not a direct communication channel between CSI and the runtime. To make this work on a shorter timeline, therefore, we need to come up with a less traditional way to communicate between the user, CSI and the runtime. I have a proposal in place that "works" today, and I'd like to get feedback on how to improve it, and see if it could be a good fit for upstream. Please see: the proposal at [1], and the PR associated with that proposal at [2].
I'm not an expert here, but why not use raw block rather than adding the skip-hostmount annotation?
https://kubernetes.io/blog/2019/03/07/raw-block-volume-support-to-beta/
+Orit Wasserman <owasserm@redhat.com>
Hey Amnon,

Raw block is supported well in Kata today (it just works); the problem is that the workload/user would then need to utilize a raw block device, when in most cases they want a mounted filesystem to interact with. The direct-assignment work's goal is to provide a mounted filesystem to the container workload.

Eric
Thanks, Amnon
Please let me know if you:
- have any suggestions on the pattern
- are a user who would find this pattern helpful for your use cases
Thanks, Eric
[1] - https://github.com/egernst/kata-containers/blob/da-proposal/docs/design/dire...
[2] - https://github.com/kata-containers/kata-containers/pull/1568
participants (6)
- Adams, Eric
- Amnon Ilan
- Christophe de Dinechin
- Eric Ernst
- Peng Tao
- Stefan Hajnoczi