not-so-common dynamic (not build) kernel configurations: examples and summary
Hi,

I mentioned in the architecture call last week two examples of dynamic kernel configuration appearing by default with kata-runtime that came somewhat as a surprise to me, in particular compared to what we'd get with a regular container (e.g. crun). Sorry for the delay in sharing details, here they come.

--
Examples:

- fq_codel instead of noqueue for default virtio-net interface:

  - crun on Fedora 32:

    [root@300cd72baa94 /]# ip li sh eth0
    3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
        link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0

  - kata-runtime:

    [root@420e660f3870 /]# ip li sh eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
        link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff

  This seems to come from runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the exact reason, fq_codel might be a sane default choice for almost any environment, and I didn't observe any breakage due to this.

- nodad on IPv6 addresses:

  - crun:

    [root@300cd72baa94 /]# ip ad sh eth0
    3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::dc87:26ff:fe08:c325/64 scope link
           valid_lft forever preferred_lft forever

  - kata-runtime:

    [root@420e660f3870 /]# ip ad sh eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
        inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad
           valid_lft forever preferred_lft forever

  This was introduced by:
  https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee27397...
  The reason is clearly explained in comments and was also reiterated by Archana last week, and totally makes sense to me in the general case. I wonder, in this particular case, if we can really assume that any upper stack or any environment can actually guarantee DAD is not needed, but I also couldn't observe any issue due to this, so far.
--

My concern is not so much related to networking components themselves at this stage, but rather, more in general, to what might happen with peculiar choices for other subsystems and userspace expectations. I don't have concrete examples about breakages and, again, this is not about build-time kernel configuration.

If I understood correctly, bergwolf's proposal from last week goes in the direction of having some kind of facility or approach that allows us to track divergences introduced in kata-runtime compared to regular containers (or to the current configuration of the host kernel), to configure those behaviours and to systematically document particular behaviours.

I'm rather new to this, so I don't have a clear picture of how this might apply, in practice, should we choose to start from those two cases as examples. Feedback and further input are warmly welcome!

--
Stefano
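For anyone who wants to inspect or undo these two defaults by hand, the standard iproute2 and sysctl knobs still apply inside the guest; a minimal sketch, assuming the workload is granted CAP_NET_ADMIN (commands are illustrative, not a recommendation):

    # look at what the runtime/agent configured
    ip -d link show eth0          # shows the fq_codel root qdisc
    ip -6 addr show eth0          # shows the nodad flag on the address

    # switch the interface back to noqueue if fq_codel is not wanted
    tc qdisc replace dev eth0 root noqueue

    # DAD can still be controlled per interface via sysctl,
    # independently of the per-address nodad flag set at address-add time
    sysctl net.ipv6.conf.eth0.accept_dad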
On 2020/5/5 21:27, Stefano Brivio wrote:
Hi,
I mentioned in the architecture call last week two examples of dynamic kernel configuration appearing by default with kata-runtime that came somewhat as a surprise to me, in particular compared to what we'd get with a regular container (e.g. crun). Sorry for the delay in sharing details, here they come.
-- Examples: - fq_codel instead of noqueue for default virtio-net interface:
- crun on Fedora 32:
[root@300cd72baa94 /]# ip li sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
- kata-runtime:
[root@420e660f3870 /]# ip li sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
This seems to come from runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the exact reason, fq_codel might be a sane default choice for almost any environment, and I didn't observe any breakage due to this.
- nodad on IPv6 addresses:
- crun:
[root@300cd72baa94 /]# ip ad sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::dc87:26ff:fe08:c325/64 scope link valid_lft forever preferred_lft forever
- kata-runtime:
[root@420e660f3870 /]# ip ad sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad valid_lft forever preferred_lft forever
This was introduced by: https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee27397... the reason is clearly explained in comments and was also reiterated by Archana last week, and totally makes sense to me in the general case. I wonder, in this particular case, if we can really assume that any upper stack or any environment can actually guarantee DAD is not needed, but I also couldn't observe any issue due to this, so far. --
My concern is not so much related to networking components themselves at this stage, but rather, more in general, to what might happen with peculiar choices for other subsystems and userspace expectations. I don't have concrete examples about breakages and, again, this is not about build-time kernel configuration.
If I understood correctly, bergwolf's proposal from last week goes in the direction of having some kind of facility or approach that allows us to track divergences introduced in kata-runtime compared to regular containers (or to the current configuration of the host kernel), to configure those behaviours and to systematically document particular behaviours.

Hi Stefano,
My main concern about making the guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for the container use case. There are a lot of kernel config options that are not going to be useful for container workloads. So instead of just using the host kernel (for Kata Containers), I would suggest using a minimal guest kernel as a basis and adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.

As for dynamic kernel options, is it enough to use kernel boot options + sysctls + kernel modules to meet the requirement? I asked about it in the AC meeting but there was no conclusion yet. Most containers shouldn't care about specific kernel options. If kernel boot options + sysctls + kernel modules still cannot satisfy it, that should be a very specialized workload, and IMO such users (who would likely be very professional at customizing things) can use a customized guest kernel instead of the standard one shipped with Kata Containers.

Just my 2 cents...

Cheers,
Tao
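To make the knobs Tao mentions concrete, a rough sketch with placeholder values (paths, parameters and the runtime name are illustrative only, not recommendations):

    # /etc/kata-containers/configuration.toml -- overrides the shipped defaults
    [hypervisor.qemu]
    # extra parameters appended to the guest kernel command line by the runtime
    kernel_params = "transparent_hugepage=never"

    # namespaced sysctls still flow through the normal OCI/engine path, e.g.:
    docker run --runtime kata-runtime --sysctl net.ipv6.conf.all.accept_dad=1 fedora:32 sleep inf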
I'm rather new to this, so I don't have a clear picture of how this might apply, in practice, should we choose to start from those two cases as examples. Feedback and further input are warmly welcome!
-- Into something rich and strange.
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/5 21:27, Stefano Brivio wrote:
Hi,
I mentioned in the architecture call last week two examples of dynamic kernel configuration appearing by default with kata-runtime that came somewhat as a surprise to me, in particular compared to what we'd get with a regular container (e.g. crun). Sorry for the delay in sharing details, here they come.
-- Examples: - fq_codel instead of noqueue for default virtio-net interface:
- crun on Fedora 32:
[root@300cd72baa94 /]# ip li sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
- kata-runtime:
[root@420e660f3870 /]# ip li sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
This seems to come from runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the exact reason, fq_codel might be a sane default choice for almost any environment, and I didn't observe any breakage due to this.
- nodad on IPv6 addresses:
- crun:
[root@300cd72baa94 /]# ip ad sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::dc87:26ff:fe08:c325/64 scope link valid_lft forever preferred_lft forever
- kata-runtime:
[root@420e660f3870 /]# ip ad sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad valid_lft forever preferred_lft forever
This was introduced by:
https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee27397...
the reason is clearly explained in comments and was also reiterated by Archana last week, and totally makes sense to me in the general case. I wonder, in this particular case, if we can really assume that any upper stack or any environment can actually guarantee DAD is not needed, but I also couldn't observe any issue due to this, so far. --
My concern is not so much related to networking components themselves at this stage, but rather, more in general, to what might happen with peculiar choices for other subsystems and userspace expectations. I don't have concrete examples about breakages and, again, this is not about build-time kernel configuration.
If I understood correctly, bergwolf's proposal from last week goes in the direction of having some kind of facility or approach that allows us to track divergences introduced in kata-runtime compared to regular containers (or to the current configuration of the host kernel), to configure those behaviours and to systematically document particular behaviours.

Hi Stefano,
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production-wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanilla container and then running it in a Kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
As far as dynamic kernel options, is it enough to use kernel boot options + sysctls + kernel modules to meet the requirement? I asked it in the AC meeting but there was no conclusion yet. Most containers shouldn't care about specific kernel options. If kernel boot options + sysctls + kernel modules still cannot satisfy it, that should be a very specialized workload and IMO such users (who would likely be very professional at customizing things) can use a customized guest kernel instead of the standard one shipped with Kata Containers.
Just my 2 cents...
Cheers, Tao
I'm rather new to this, so I don't have a clear picture of how this might apply, in practice, should we choose to start from those two cases as examples. Feedback and further input are warmly welcome!
-- Into something rich and strange.
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
On 2020/5/5 21:27, Stefano Brivio wrote:
> [...]

Hi Stefano,
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than to wait for the host kernel to be upgraded. So I would suggest that we do not bind the Kata Containers kernel to a host's running kernel.

Also, feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.

And it makes sense to ship the same kernel for different distributions in order to provide the same user experience. We then only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.

OTOH, we do not forbid users from using their running host kernel as the guest kernel. It is pretty easy to configure Kata Containers to do so.

Cheers,
Tao

--
Into something rich and strange.
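For reference, the "easy to configure" part boils down to pointing the runtime at a different kernel image; a minimal sketch with a placeholder path (the exact file name depends on the distribution, and that kernel still needs the virtio/guest options Kata relies on):

    # /etc/kata-containers/configuration.toml
    [hypervisor.qemu]
    # use the distribution kernel instead of the Kata-built guest kernel
    kernel = "/boot/vmlinuz-5.6.13-300.fc32.x86_64"   # placeholder path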
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.

Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)

--
Stefano
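For readers who haven't used it, the script Stefano refers to lives in the Kata packaging repository (kernel/build-kernel.sh) and drives the fragment merge, build and install steps; roughly as follows, with flags recalled from memory, so check the script's help output before relying on them:

    # in the packaging repository's kernel/ directory
    ./build-kernel.sh -v 5.4.32 setup             # fetch sources, merge config fragments, apply patches
    ./build-kernel.sh -v 5.4.32 build
    ./build-kernel.sh -v 5.4.32 -g nvidia setup   # same, but selecting the NVIDIA GPU fragment set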
On 2020/5/6 19:54, Stefano Brivio wrote:
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
As I mentioned, we do provide ways for users to configure Kata Containers to use the host kernel. So that possibility exists even now.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
Yes. Again, it is already possible to use the same kernel for both host and guest. So nothing is broken for them.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.
Yes I agree that there is no one-solution-for-all. That is why we have so many configuration options. It is just about what we enable by default.
Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)
It is not just about the testing burden. We want users to have a minimal kernel memory footprint. That is why the guest kernel shipped by Kata Containers is customized to be very small and only contains what we think is necessary for most container workloads. A distribution host kernel is more general and tends to enable many kernel options that are not going to be useful in a container workload guest.

Speaking of letting distributions validate the guest kernel, if a distribution provides a kernel that specifically targets a cloud use case, it would be a much better fit for Kata Containers, although it is still a different kernel package than the host one.

Also, we do ship kernel binaries. Check out the repositories on OBS [1] ;-)

Cheers,
Tao

[1] http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x8...

--
Into something rich and strange.
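For context, the "customized to be very small" config is assembled from small fragment files merged on top of a minimal base; an illustrative (not actual) fragment, just to show the shape:

    # a whitelist-style fragment, e.g. under kernel/configs/fragments/ in the packaging repo
    CONFIG_VIRTIO_PCI=y
    CONFIG_VIRTIO_NET=y
    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_CONSOLE=y
    # CONFIG_DEBUG_INFO is not set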
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 19:54, Stefano Brivio wrote:
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
As I mentioned, we do provide methods for users to configure to use the host kernel for Kata Containers. So the possibility is possible even now.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
Yes. Again, it is already possible to use the same kernel for both host and guest. So noting is broken for them.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.
Yes I agree that there is no one-solution-for-all. That is why we have so many configuration options. It is just about what we enable by default.
Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)
It is not just about testing burden. We would want users to have a minimal kernel memory footprint. That is why Kata Containers shipped guest kernel is customized to be very small and only contains what we think is necessary for most container workloads. A distribution host kernel is more general and tends to enable many kernel options that are not going to be useful for a container workload guest.
Speaking of letting distributions validate the guest kernel, if a distribution provides a version of kernel that specially targets a cloud use case, it would be a much better fit for Kata Containers, although it is still a different kernel package than the host one.
Do we understand which kernel config options are explicit choices by Kata and which are just down to the config that Kata started with?

Dave
Also we do ship kernel binaries. Checkout the repositories on obs [1] ;-)
Cheers, Tao
[1] http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x8...
-- Into something rich and strange.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2020/5/6 21:35, Dr. David Alan Gilbert wrote:
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 19:54, Stefano Brivio wrote:
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
As I mentioned, we do provide methods for users to configure to use the host kernel for Kata Containers. So the possibility is possible even now.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
Yes. Again, it is already possible to use the same kernel for both host and guest. So noting is broken for them.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.
Yes I agree that there is no one-solution-for-all. That is why we have so many configuration options. It is just about what we enable by default.
Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)
It is not just about testing burden. We would want users to have a minimal kernel memory footprint. That is why Kata Containers shipped guest kernel is customized to be very small and only contains what we think is necessary for most container workloads. A distribution host kernel is more general and tends to enable many kernel options that are not going to be useful for a container workload guest.
Speaking of letting distributions validate the guest kernel, if a distribution provides a version of kernel that specially targets a cloud use case, it would be a much better fit for Kata Containers, although it is still a different kernel package than the host one.
Do we understand which kernel config options are explicit choices by Kata and which are just down to the config that Kata started with?
It was based on the Clear Containers kernel config in the beginning [1]. Maybe Intel folks can tell more about where the Clear Containers one came from? (Copying Geronimo Orozco, who originally committed the Clear Containers kernel config per [2].)

Cheers,
Tao

[1] https://github.com/kata-containers/linux/pull/5
[2] https://github.com/clearcontainers/packaging/commit/f6c9474aa93435ad05d6259e...

--
Into something rich and strange.
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 21:35, Dr. David Alan Gilbert wrote:
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 19:54, Stefano Brivio wrote:
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng@linux.alibaba.com <mailto:tao.peng@linux.alibaba.com>> wrote:
My main concern about making guest kernel behave like the host kernel is that we might lose the ability to have a customized/optimized kernel just for container use case. There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Production wise there is a lot of value in having the same kernel on the host and the guest. For example, taking a workload that has been run as a vanila container and then running it on a kata container could require a testing/certification process from scratch if the host/guest kernels are different. Kernel CVEs would also be better managed if the host/guest kernels are the same.
From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
As I mentioned, we do provide methods for users to configure to use the host kernel for Kata Containers. So the possibility is possible even now.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
Yes. Again, it is already possible to use the same kernel for both host and guest. So noting is broken for them.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.
Yes I agree that there is no one-solution-for-all. That is why we have so many configuration options. It is just about what we enable by default.
Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)
It is not just about testing burden. We would want users to have a minimal kernel memory footprint. That is why Kata Containers shipped guest kernel is customized to be very small and only contains what we think is necessary for most container workloads. A distribution host kernel is more general and tends to enable many kernel options that are not going to be useful for a container workload guest.
Speaking of letting distributions validate the guest kernel, if a distribution provides a version of kernel that specially targets a cloud use case, it would be a much better fit for Kata Containers, although it is still a different kernel package than the host one.
Do we understand which kernel config options are explicit choices by Kata and which are just down to the config that Kata started with?
It was based on clear containers kernel config in the beginning [1]. Maybe Intel folks can tell more about where the clear containers one came from?
(Copying Geronimo Orozco who originally committed the clear containers kernel config per [2])
But I think if the choices were documented, then it would be much easier to justify why a distro might want a specific set of configs for Kata use.

Dave
Cheers, Tao
[1] https://github.com/kata-containers/linux/pull/5 [2] https://github.com/clearcontainers/packaging/commit/f6c9474aa93435ad05d6259e...
-- Into something rich and strange.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On 2020/5/6 22:18, Dr. David Alan Gilbert wrote:
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 21:35, Dr. David Alan Gilbert wrote:
* Peng Tao (tao.peng@linux.alibaba.com) wrote:
On 2020/5/6 19:54, Stefano Brivio wrote:
On Wed, 6 May 2020 14:11:48 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/6 13:25, Ariel Adam wrote:
> [...]

From production experience, it is much easier to upgrade a guest kernel than waiting for the host kernel to be upgraded. So I would suggest that we do not bind Kata Containers kernel to a host's running kernel.
I think nobody is suggesting that they should be forcefully bound, but still I see that usage as a very reasonable possibility (especially for the reasons Ariel mentioned), and that already works to a very good extent.
As I mentioned, we do provide methods for users to configure to use the host kernel for Kata Containers. So the possibility is possible even now.
Also feature-wise, we can use a newer kernel to run Kata Containers on hosts that are running older kernels. So users running their good old kernels can still make use of new kernel features with Kata Containers.
Well, there is actually a reason why they're running older (or newer!) kernels, and that might apply to kata-runtime as well.
Yes. Again, it is already possible to use the same kernel for both host and guest. So noting is broken for them.
And it makes sense to ship the same kernel for different distributions in order to provide same user experience. And we only need to validate and maintain one guest kernel for all distributions, which is much easier than validating each kernel for each distribution version.
While I understand the reasoning behind this, it won't apply in every situation. For example, if there's a security flaw in the kernel, this would have the obvious drawback of requiring two packages (from a distribution perspective) to be upgraded at the same time. There are specific advantages and degrees of consistency both ways.
Yes I agree that there is no one-solution-for-all. That is why we have so many configuration options. It is just about what we enable by default.
Also mind that Kata Containers doesn't really ship a kernel (neither binary nor source). It ships (useful!) configuration fragments and a script, but you can't control the compiler or the toolchain, or even whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so, while the scripting undoubtedly takes some burden off the testing effort, I don't see much value going beyond that. This is not the kind of "validation" a distribution does -- which by the way makes perfect sense to me. Let the distribution do that :)
It is not just about testing burden. We would want users to have a minimal kernel memory footprint. That is why Kata Containers shipped guest kernel is customized to be very small and only contains what we think is necessary for most container workloads. A distribution host kernel is more general and tends to enable many kernel options that are not going to be useful for a container workload guest.
Speaking of letting distributions validate the guest kernel, if a distribution provides a version of kernel that specially targets a cloud use case, it would be a much better fit for Kata Containers, although it is still a different kernel package than the host one.
Do we understand which kernel config options are explicit choices by Kata and which are just down to the config that Kata started with?
It was based on clear containers kernel config in the beginning [1]. Maybe Intel folks can tell more about where the clear containers one came from?
(Copying Geronimo Orozco who originally committed the clear containers kernel config per [2])
But I think if the choices was documented, then it would be much easier to justify why a distro might want a specific set of configs for Kata use.

No, I don't think we have documented any of it. It mostly depended on developer experience to form the current kernel config. And TBH, Red Hat developers are much better at doing it. So we would love your feedback on which kernel options should be turned on/off.
From a distribution point of view, I totally understand your motivation. It is really your call whether you want to distribute Kata's default kernel or a different one for your users.

From an upstream point of view, the upstream code makes sure that it is easy for distributions to customize. If a customization proves to be useful, it may as well be turned on by default in the upstream code base. Taking the default kernel as an example, IMO it is totally possible for the upstream code to switch to a RHEL or CentOS kernel if that turns out to be what most users want.

Cheers,
Tao
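One low-effort way to produce that feedback, and to start answering Dave's earlier question about which options are deliberate choices, would be to diff the Kata guest config against a distribution config with the kernel tree's own tooling; a sketch with placeholder paths:

    # from an unpacked kernel source tree with the Kata guest .config in place
    scripts/diffconfig /boot/config-$(uname -r) .config | less
    # +/- lines show options present on only one side,
    # 'OPTION old_value -> new_value' shows options whose value differs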
Dave
Cheers, Tao
[1] https://github.com/kata-containers/linux/pull/5 [2] https://github.com/clearcontainers/packaging/commit/f6c9474aa93435ad05d6259e...
-- Into something rich and strange.
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
-- Into something rich and strange.
Hi Tao,

On Wed, 6 May 2020 10:36:02 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/5 21:27, Stefano Brivio wrote:
Hi,
I mentioned in the architecture call last week two examples of dynamic kernel configuration appearing by default with kata-runtime that came somewhat as a surprise to me, in particular compared to what we'd get with a regular container (e.g. crun). Sorry for the delay in sharing details, here they come.
-- Examples: - fq_codel instead of noqueue for default virtio-net interface:
- crun on Fedora 32:
[root@300cd72baa94 /]# ip li sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
- kata-runtime:
[root@420e660f3870 /]# ip li sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
This seems to come from runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the exact reason, fq_codel might be a sane default choice for almost any environment, and I didn't observe any breakage due to this.
- nodad on IPv6 addresses:
- crun:
[root@300cd72baa94 /]# ip ad sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::dc87:26ff:fe08:c325/64 scope link valid_lft forever preferred_lft forever
- kata-runtime:
[root@420e660f3870 /]# ip ad sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad valid_lft forever preferred_lft forever
This was introduced by: https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee27397... the reason is clearly explained in comments and was also reiterated by Archana last week, and totally makes sense to me in the general case. I wonder, in this particular case, if we can really assume that any upper stack or any environment can actually guarantee DAD is not needed, but I also couldn't observe any issue due to this, so far. --
My concern is not so much related to networking components themselves at this stage, but rather, more in general, to what might happen with peculiar choices for other subsystems and userspace expectations. I don't have concrete examples about breakages and, again, this is not about build-time kernel configuration.
If I understood correctly, bergwolf's proposal from last week goes in the direction of having some kind of facility or approach that allows us to track divergences introduced in kata-runtime compared to regular containers (or to the current configuration of the host kernel), to configure those behaviours and to systematically document particular behaviours.

Hi Stefano,
My main concern about making guest kernel behave like the host kernel
Wait, that's not what I'm proposing or suggesting in any way. I'm just concerned that we might, more or less knowingly, over time, grow a set of rather specific *runtime* divergences in kernel configuration from what one might reasonably expect from a container environment, and end up breaking reasonable user or userspace expectations without keeping track and without an easy way to fix those expectations later.

Going back to my (harmless) examples in this perspective:

- fq_codel as default qdisc: I don't know why it's there, git log didn't help, and I'm not sure anybody can even remember or explain.

- nodad on IPv6 addresses: it makes sense in the general kata-runtime case and it's well documented in the agent code, but will it be visible enough to others playing with this ecosystem? Should it be configurable, in case one can't ensure from "outside" that it's not needed?
is that we might lose the ability to have a customized/optimized kernel just for container use case.
Sure, I'm just trying to find out if there's a way to consistently document (and perhaps make configurable) these customisations or optimisations, at least the most significant ones, also to make them more sustainable in some sense.
There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Actually, I see this as a separate topic, and it wasn't my intention to raise this here. As a side note, I understand the reasoning behind your suggestion, and I also think Ariel's and your further points are valid in specific situations, that are currently covered by specific Linux distributions. I think configuration fragments are useful, and so is the current script, for testing, integration, and as guidance. It's also something a distribution might decide to ship and integrate, I guess. However, attempts to somehow force that model as the True and Only one looks severely limiting to me, and I don't really see the benefit (more on that in my other reply on this thread).
As far as dynamic kernel options, is it enough to use kernel boot options + sysctls + kernel modules to meet the requirement? I asked it in the AC meeting but there was no conclusion yet.
Yes, I didn't present a requirement in this sense, because I don't have any specific requirement "to be supported" in mind. I was referring to a more general topic of traceability of specific behaviours.

Now, if I reuse your input as something that we should consider to include in those specific behaviours, yes, ideally I think that:

- boot options that are not explicitly appended via toml configuration, and that are obviously not functional for the specific container scenario (kata-runtime), should be somewhat tracked

- sysctl writes (I guess by the agent), same

- modules: would the agent really need to "manually" load specific modules? Will it ever need to?

Both examples that I presented are about configurations that happen via netlink, so I'd consider that in a tentative list of the kind of configuration that needs to be tracked (and, I'm not sure, perhaps made configurable in any case?).
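A concrete starting point for that kind of tracking could be as simple as dumping, from inside the guest, the state those mechanisms end up producing and diffing it against a plain runc/crun container; a rough sketch (illustrative commands only):

    cat /proc/cmdline                    # boot options actually passed to the guest
    ip -d link show; ip -6 addr show     # qdisc, nodad and other netlink-applied state
    sysctl -a 2>/dev/null | sort         # compare against the same dump from a runc/crun container
    lsmod                                # modules loaded in the guest, if any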
Most containers shouldn't care about specific kernel options. If kernel boot options + sysctls + kernel modules still cannot satisfy it, that should be a very specialized workload and IMO such users (who would likely be very professional at customizing things) can use a customized guest kernel instead of the standard one shipped with Kata Containers.
Indeed.

--
Stefano
On 2020/5/6 19:53, Stefano Brivio wrote:
Hi Tao,
On Wed, 6 May 2020 10:36:02 +0800 Peng Tao <tao.peng@linux.alibaba.com> wrote:
On 2020/5/5 21:27, Stefano Brivio wrote:
Hi,
I mentioned in the architecture call last week two examples of dynamic kernel configuration appearing by default with kata-runtime that came somewhat as a surprise to me, in particular compared to what we'd get with a regular container (e.g. crun). Sorry for the delay in sharing details, here they come.
-- Examples: - fq_codel instead of noqueue for default virtio-net interface:
- crun on Fedora 32:
[root@300cd72baa94 /]# ip li sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
- kata-runtime:
[root@420e660f3870 /]# ip li sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
This seems to come from runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the exact reason, fq_codel might be a sane default choice for almost any environment, and I didn't observe any breakage due to this.
- nodad on IPv6 addresses:
- crun:
[root@300cd72baa94 /]# ip ad sh eth0 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::dc87:26ff:fe08:c325/64 scope link valid_lft forever preferred_lft forever
- kata-runtime:
[root@420e660f3870 /]# ip ad sh eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad valid_lft forever preferred_lft forever
This was introduced by: https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee27397... the reason is clearly explained in comments and was also reiterated by Archana last week, and totally makes sense to me in the general case. I wonder, in this particular case, if we can really assume that any upper stack or any environment can actually guarantee DAD is not needed, but I also couldn't observe any issue due to this, so far. --
My concern is not so much related to networking components themselves at this stage, but rather, more in general, to what might happen with peculiar choices for other subsystems and userspace expectations. I don't have concrete examples about breakages and, again, this is not about build-time kernel configuration.
If I understood correctly, bergwolf's proposal from last week goes in the direction of having some kind of facility or approach that allows us to track divergences introduced in kata-runtime compared to regular containers (or to the current configuration of the host kernel), to configure those behaviours and to systematically document particular behaviours.

Hi Stefano,
My main concern about making guest kernel behave like the host kernel
Wait, that's not what I'm proposing or suggesting in any way, I'm just concerned that we might, more or less knowingly, over time, grow a set of rather specific *runtime* divergences in kernel configuration with what one might reasonably expect from a container environment, and end up breaking reasonable user or userspace expectations without keeping track and without an easy way to fix those expectations later.
Going back to my (harmless) examples in this perspective:
- fq_codel as default qdisc: I don't know why it's there, git log didn't help, I'm not sure anybody can even remember or explain
Maybe Manohar can answer that? But it is quite possible that he just picked one and didn't expect a container workload to care much about it. So there is a chance to make it configurable if necessary.
- nodad on IPv6 addresses: makes sense in a general kata-runtime case, it's well documented in the agent code, but will it be visible enough to others playing with this ecosystem? Should it be configurable, in case one can't ensure it's not needed from "outside"?
Same here. We can make it configurable but we need a use case to prove the need first.
is that we might lose the ability to have a customized/optimized kernel just for container use case.
Sure, I'm just trying to find out if there's a way to consistently document (and perhaps make configurable) these customisations or optimisations, at least the most significant ones, also to make them more sustainable in some sense.
In that case, I'm all with you! Better documentation always helps!
There are a lot of kernel config options that are not going to be useful for container workload. So instead of just using the host kernel (for kata containers), I would suggest just using a minimal guest kernel as a basis and start adding new config options/modules as we identify new needs. And that is what we have been doing for Kata Containers in the past years.
Actually, I see this as a separate topic, and it wasn't my intention to raise this here.
Sorry, I missed the last AC meeting and thought you were continuing that topic from the previous one ;)
As a side note, I understand the reasoning behind your suggestion, and I also think Ariel's and your further points are valid in specific situations, that are currently covered by specific Linux distributions.
I think configuration fragments are useful, and so is the current script, for testing, integration, and as guidance. It's also something a distribution might decide to ship and integrate, I guess.
However, attempts to somehow force that model as the True and Only one looks severely limiting to me, and I don't really see the benefit (more on that in my other reply on this thread).
As far as dynamic kernel options, is it enough to use kernel boot options + sysctls + kernel modules to meet the requirement? I asked it in the AC meeting but there was no conclusion yet.
Yes, I didn't present a requirement in this sense, because I don't have any specific requirement "to be supported" in mind. I was referring to a more general topic of traceability of specific behaviours.
Now, if I reuse your input as something that we should consider to include in those specific behaviours, yes, ideally I think that:
- boot options that are not explicitly appended via toml configuration, and that are obviously not functional for the specific container scenario (kata-runtime) should be somewhat tracked
- sysctl writes (I guess by the agent), same
- modules, would the agent really need to "manually" load specific modules? Will it ever need to?
Not just the agent. I was thinking that apps might want to use modules too. We do allow containers with the SYS_ADMIN privilege.
Both examples that I presented are about configurations that happen via netlink, so I'd consider that in a tentative list of the kind of configuration that needs to be tracked (and, I'm not sure, perhaps made configurable in any case?).
I'm not sure we want to make every option configurable in any case. Right now we are looking at it case by case, based on real-world use cases.

Cheers,
Tao

--
Into something rich and strange.
Participants (4): Ariel Adam, Dr. David Alan Gilbert, Peng Tao, Stefano Brivio