[kata-dev] kernel build configuration, was: Re: not-so-common dynamic (not build) kernel configurations: examples and summary
Peng Tao
tao.peng at linux.alibaba.com
Wed May 6 16:12:51 UTC 2020
On 2020/5/6 22:18, Dr. David Alan Gilbert wrote:
> * Peng Tao (tao.peng at linux.alibaba.com) wrote:
>>
>>
>> On 2020/5/6 21:35, Dr. David Alan Gilbert wrote:
>>> * Peng Tao (tao.peng at linux.alibaba.com) wrote:
>>>>
>>>>
>>>> On 2020/5/6 19:54, Stefano Brivio wrote:
>>>>> On Wed, 6 May 2020 14:11:48 +0800
>>>>> Peng Tao <tao.peng at linux.alibaba.com> wrote:
>>>>>
>>>>>> On 2020/5/6 13:25, Ariel Adam wrote:
>>>>>>>
>>>>>>> On Wed, May 6, 2020 at 5:37 AM Peng Tao <tao.peng at linux.alibaba.com> wrote:
>>>>>>>
>>>>>>> My main concern about making the guest kernel behave like the host
>>>>>>> kernel is that we might lose the ability to have a
>>>>>>> customized/optimized kernel just for the container use case. There
>>>>>>> are a lot of kernel config options that are not going to be useful
>>>>>>> for container workloads. So instead of just using the host kernel
>>>>>>> (for Kata Containers), I would suggest using a minimal guest kernel
>>>>>>> as a basis and adding new config options/modules as we identify new
>>>>>>> needs. That is what we have been doing for Kata Containers over the
>>>>>>> past years.
>>>>>>>
>>>>>>> Production-wise, there is a lot of value in having the same kernel
>>>>>>> on the host and the guest.
>>>>>>> For example, taking a workload that has been run as a vanilla
>>>>>>> container and then running it on a Kata container could require a
>>>>>>> testing/certification process from scratch if the host/guest kernels
>>>>>>> are different.
>>>>>>> Kernel CVEs would also be better managed if the host/guest kernels
>>>>>>> are the same.
>>>>>> From production experience, it is much easier to upgrade a guest
>>>>>> kernel than to wait for the host kernel to be upgraded. So I would
>>>>>> suggest that we do not bind the Kata Containers kernel to a host's
>>>>>> running kernel.
>>>>>
>>>>> I think nobody is suggesting that they should be forcefully bound, but
>>>>> still I see that usage as a very reasonable possibility (especially for
>>>>> the reasons Ariel mentioned), and that already works to a very good
>>>>> extent.
>>>>>
>>>> As I mentioned, we do provide ways for users to configure Kata
>>>> Containers to use the host kernel. So that is already possible even now.
>>>>
>>>>>> Also feature-wise, we can use a newer kernel to run Kata Containers on
>>>>>> hosts that are running older kernels. So users running their good old
>>>>>> kernels can still make use of new kernel features with Kata Containers.
>>>>>
>>>>> Well, there is actually a reason why they're running older (or newer!)
>>>>> kernels, and that might apply to kata-runtime as well.
>>>>>
>>>> Yes. Again, it is already possible to use the same kernel for both host
>>>> and guest. So nothing is broken for them.
>>>>
>>>>>> It also makes sense to ship the same kernel for different
>>>>>> distributions in order to provide the same user experience. Then we
>>>>>> only need to validate and maintain one guest kernel for all
>>>>>> distributions, which is much easier than validating a separate kernel
>>>>>> for each distribution version.
>>>>>
>>>>> While I understand the reasoning behind this, it won't apply in every
>>>>> situation. For example, if there's a security flaw in the kernel, this
>>>>> would have the obvious drawback of requiring two packages (from a
>>>>> distribution perspective) to be upgraded at the same time. There are
>>>>> specific advantages and degrees of consistency both ways.
>>>>
>>>> Yes, I agree that there is no one-size-fits-all solution. That is why
>>>> we have so many configuration options. It is just a question of what we
>>>> enable by default.
>>>>
>>>>>
>>>>> Also mind that Kata Containers doesn't really ship a kernel (neither
>>>>> binary nor source). It ships (useful!) configuration fragments and a
>>>>> script, but you can't control the compiler or the toolchain, or even
>>>>> whether "-g nvidia" or "-g intel" is passed to build-kernel.sh, so,
>>>>> while the scripting undoubtedly takes some burden off the testing
>>>>> effort, I don't see much value going beyond that. This is not the kind
>>>>> of "validation" a distribution does -- which by the way makes perfect
>>>>> sense to me. Let the distribution do that :)
>>>>>
>>>> It is not just about the testing burden. We want users to have a
>>>> minimal kernel memory footprint. That is why the guest kernel shipped
>>>> by Kata Containers is customized to be very small and only contains
>>>> what we think is necessary for most container workloads. A distribution
>>>> host kernel is more general and tends to enable many kernel options
>>>> that are not useful in a container workload guest.
>>>>
>>>> Speaking of letting distributions validate the guest kernel, if a
>>>> distribution provides a kernel version that specifically targets cloud
>>>> use cases, it would be a much better fit for Kata Containers, even
>>>> though it is still a different kernel package than the host one.
>>>
>>> Do we understand which kernel config options are explicit choices by
>>> Kata and which are just down to the config that Kata started with?
>>>
>> It was based on the Clear Containers kernel config in the beginning [1].
>> Maybe Intel folks can tell us more about where the Clear Containers one
>> came from?
>>
>> (Copying Geronimo Orozco, who originally committed the Clear Containers
>> kernel config per [2])
>
> But I think if the choices were documented, then it would be much easier
> to justify why a distro might want a specific set of configs for Kata
> use.
No, I don't think we have documented any of it. The current kernel config
was mostly shaped by developer experience. And TBH, Red Hat developers
are much better at this, so we would love your feedback on which kernel
options should be turned on/off.
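
To give a concrete idea of what we ship: the config is maintained as
plain kernel config fragments next to build-kernel.sh, along the lines
of the excerpt below (illustrative, not the verbatim shipped fragment):

  # virtio is the main I/O path for a Kata guest
  CONFIG_VIRTIO=y
  CONFIG_VIRTIO_PCI=y
  CONFIG_VIRTIO_BLK=y
  CONFIG_VIRTIO_NET=y
  # shared filesystem support for the container rootfs/volumes
  CONFIG_NET_9P=y
  CONFIG_NET_9P_VIRTIO=y
  CONFIG_9P_FS=y
  # hardware a single-purpose guest never sees
  # CONFIG_SOUND is not set
  # CONFIG_WLAN is not set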
From a distribution point of view, I totally understand your motivation.
It is really your call whether you want to distribute Kata's default
kernel or a different one for your users.
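
Mechanically that choice is just a path in the runtime's
configuration.toml, so swapping in a distro kernel is a one-line change;
something like this (paths illustrative):

  [hypervisor.qemu]
  # point this at the kernel the guest should boot, e.g. a distro
  # kernel package instead of Kata's default minimal one
  kernel = "/usr/share/kata-containers/vmlinux.container"
  kernel_params = ""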
From an upstream point of view, the upstream code makes sure that it is
easy for distributions to customize. If a customization proves to be
useful, it may as well be turned on by default in the upstream code base.
Taking the default kernel as an example, IMO it is entirely possible for
upstream to switch to a RHEL or CentOS kernel if that turns out to be
what most users want.
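
And for anyone who wants to experiment with their own config on top of
our fragments, the build-kernel.sh flow Stefano mentioned is short;
roughly (exact options depend on the checked-out version):

  $ ./build-kernel.sh setup    # fetch kernel source, apply config fragments
  $ ./build-kernel.sh build    # build the guest kernel
  $ sudo ./build-kernel.sh install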
Cheers,
Tao
>
> Dave
>
>>
>> Cheers,
>> Tao
>>
>> [1] https://github.com/kata-containers/linux/pull/5
>> [2] https://github.com/clearcontainers/packaging/commit/f6c9474aa93435ad05d6259e6e9793ce5467222d
>>
>> --
>> Into something rich and strange.
>>
> --
> Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK
>
--
Into something rich and strange.