[kata-dev] not-so-common dynamic (not build) kernel configurations: examples and summary

Ariel Adam aadam at redhat.com
Wed May 6 05:25:39 UTC 2020


On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng at linux.alibaba.com> wrote:

>
>
> On 2020/5/5 21:27, Stefano Brivio wrote:
> > Hi,
> >
> > I mentioned in the architecture call last week two examples of dynamic
> > kernel configuration appearing by default with kata-runtime that came
> > somewhat as a surprise to me, in particular compared to what we'd get
> > with a regular container (e.g. crun). Sorry for the delay in sharing
> > the details; here they come.
> >
> > --
> > Examples:
> > - fq_codel instead of noqueue on the default virtio-net interface:
> >
> >    - crun on Fedora 32:
> >
> > [root@300cd72baa94 /]# ip li sh eth0
> > 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
> >      link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >
> >    - kata-runtime:
> >
> > [root@420e660f3870 /]# ip li sh eth0
> > 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
> >      link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
> >
> > This seems to come from
> > runtime/vendor/github.com/vishvananda/netlink/qdisc.go. I don't know the
> > exact reason; fq_codel might be a sane default choice for almost any
> > environment, and I didn't observe any breakage due to it.
> >
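Just to make this concrete (and as an aside from me, not something the
runtime does today): if one wanted to experiment with forcing the guest
interface back to noqueue, a minimal sketch using the same
vishvananda/netlink package the runtime vendors could look like this. It
assumes CAP_NET_ADMIN inside the guest and that "eth0" is the interface
in question; it is only an illustration.

package main

import (
    "log"

    "github.com/vishvananda/netlink"
)

func main() {
    link, err := netlink.LinkByName("eth0")
    if err != nil {
        log.Fatalf("lookup eth0: %v", err)
    }
    // Replace the root qdisc with noqueue, matching what a veth pair
    // inside a regular container namespace shows by default.
    qdisc := &netlink.GenericQdisc{
        QdiscAttrs: netlink.QdiscAttrs{
            LinkIndex: link.Attrs().Index,
            Parent:    netlink.HANDLE_ROOT,
            Handle:    netlink.MakeHandle(0, 0),
        },
        QdiscType: "noqueue",
    }
    if err := netlink.QdiscReplace(qdisc); err != nil {
        log.Fatalf("replace qdisc: %v", err)
    }
}

From a shell, "tc qdisc replace dev eth0 root noqueue" should do the
same, which makes it easy to check whether a given workload actually
cares about the queueing discipline.
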
> > - nodad on IPv6 addresses:
> >
> >    - crun:
> >
> > [root@300cd72baa94 /]# ip ad sh eth0
> > 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
> >      link/ether de:87:26:08:c3:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >      inet 10.88.0.27/16 brd 10.88.255.255 scope global eth0
> >         valid_lft forever preferred_lft forever
> >      inet6 fe80::dc87:26ff:fe08:c325/64 scope link
> >         valid_lft forever preferred_lft forever
> >
> >    - kata-runtime:
> >
> > [root@420e660f3870 /]# ip ad sh eth0
> > 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
> >      link/ether 86:ab:39:73:7c:b8 brd ff:ff:ff:ff:ff:ff
> >      inet 10.88.0.28/16 brd 10.88.255.255 scope global eth0
> >         valid_lft forever preferred_lft forever
> >      inet6 fe80::84ab:39ff:fe73:7cb8/64 scope link nodad
> >         valid_lft forever preferred_lft forever
> >
> > This was introduced by:
> >
> https://github.com/kata-containers/agent/pull/722/commits/c66b9279cc8ee273973878240c255e9d8fe8e552
> > The reason is clearly explained in the comments and was also reiterated
> > by Archana last week, and it totally makes sense to me in the general
> > case. I wonder, in this particular case, whether we can really assume
> > that any upper stack or any environment can actually guarantee DAD is
> > not needed, but I also couldn't observe any issue due to this, so far.
> > --
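
For reference, and if I read the agent change right, the nodad behaviour
boils down to setting the IFA_F_NODAD flag when the address is installed.
A rough, self-contained sketch with vishvananda/netlink (the interface
name and address below are made up for illustration, and CAP_NET_ADMIN is
assumed):

package main

import (
    "log"

    "github.com/vishvananda/netlink"
    "golang.org/x/sys/unix"
)

func main() {
    link, err := netlink.LinkByName("eth0")
    if err != nil {
        log.Fatalf("lookup eth0: %v", err)
    }
    addr, err := netlink.ParseAddr("fe80::84ab:39ff:fe73:7cb8/64")
    if err != nil {
        log.Fatalf("parse address: %v", err)
    }
    // Ask the kernel to skip Duplicate Address Detection for this address.
    addr.Flags = unix.IFA_F_NODAD
    if err := netlink.AddrAdd(link, addr); err != nil {
        log.Fatalf("add address: %v", err)
    }
}

Since it is a per-address flag, it looks like something that could be
made configurable if an environment turns out to need DAD after all.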
> >
> > My concern at this stage is not so much about the networking components
> > themselves, but rather, more generally, about what might happen with
> > peculiar choices for other subsystems and userspace expectations. I
> > don't have concrete examples of breakage and, again, this is not
> > about build-time kernel configuration.
> >
> > If I understood correctly, bergwolf's proposal from last week goes in
> > the direction of having some kind of facility or approach that allows
> > us to track divergences introduced by kata-runtime compared to regular
> > containers (or to the current configuration of the host kernel), to
> > make those behaviours configurable, and to document them
> > systematically.
> Hi Stefano,
>
> My main concern about making the guest kernel behave like the host kernel
> is that we might lose the ability to have a customized/optimized kernel
> just for the container use case. There are a lot of kernel config options
> that are not going to be useful for container workloads. So instead of
> just using the host kernel (for Kata Containers), I would suggest
> starting from a minimal guest kernel as a basis and adding new config
> options/modules as we identify new needs. And that is what we have been
> doing for Kata Containers over the past years.
>
>
Production-wise, there is a lot of value in having the same kernel on the
host and the guest.
For example, a workload that has been run as a vanilla container and is
then moved to a Kata container could require a testing/certification
process from scratch if the host and guest kernels are different.
Kernel CVEs would also be easier to manage if the host and guest kernels
were the same.



> As for dynamic kernel options, is it enough to use kernel boot
> options + sysctls + kernel modules to meet the requirement? I asked this
> in the AC meeting but there was no conclusion yet. Most containers
> shouldn't care about specific kernel options. If kernel boot options +
> sysctls + kernel modules still cannot satisfy a workload, it is probably
> a very specialized one, and IMO such users (who are likely very
> experienced at customizing things) can use a customized guest kernel
> instead of the standard one shipped with Kata Containers.
>
> Just my 2 cents...
>
> Cheers,
> Tao
>
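To make the boot-options part concrete: guest kernel parameters can
already be appended through the kernel_params setting in Kata's
configuration.toml (typically shipped as
/usr/share/defaults/kata-containers/configuration.toml). For example (the
values here are made up for illustration):

[hypervisor.qemu]
# Extra parameters appended to the guest kernel command line.
kernel_params = "ipv6.disable=0 net.ifnames=0"

Sysctls and modules can then be handled inside the guest as usual, which,
if I understand the thread correctly, leaves settings like the two
examples above, applied programmatically by the runtime/agent, as the
cases that still need a knob and documentation.
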
> >
> > I'm rather new to this, so I don't have a clear picture of how this
> > might apply, in practice, should we choose to start from those two
> > cases as examples. Feedback and further input are warmly welcome!
> >
>
> --
> Into something rich and strange.