[kata-dev] Performance isolation: expectations for number of CPUs

Eric Ernst eric.g.ernst at gmail.com
Wed Jul 7 02:04:30 UTC 2021


Thanks for the explanation, Eric.

The high-level summary of why you're seeing this:
 Kata works well, and correctly imo, when a pod is 'guaranteed', as well
as 'burstable' when a limit is set. We do not 'properly' support
bestEffort or unbound burstable pods (today).
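
A minimal sketch of those QoS buckets for CPU only (simplified; the real
Kubernetes rules also consider memory, and an unset request defaults to
the limit, which the sketch mirrors):

    package vcpu

    // cpuSpec holds a container's CPU request and limit in millicores;
    // 0 means "unset".
    type cpuSpec struct {
        requestMilli int64
        limitMilli   int64
    }

    // qosForCPU classifies a pod from its containers: BestEffort when
    // nothing is set, Guaranteed when every container's request equals
    // its limit, Burstable otherwise.
    func qosForCPU(containers []cpuSpec) string {
        guaranteed, bestEffort := true, true
        for _, c := range containers {
            if c.requestMilli != 0 || c.limitMilli != 0 {
                bestEffort = false
            }
            req := c.requestMilli
            if req == 0 {
                req = c.limitMilli // unset request defaults to the limit
            }
            if c.limitMilli == 0 || req != c.limitMilli {
                guaranteed = false
            }
        }
        switch {
        case bestEffort:
            return "BestEffort"
        case guaranteed:
            return "Guaranteed"
        default:
            return "Burstable"
        }
    }

In these terms, the problem cases are bestEffort pods and burstable pods
where some container carries no limit.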

In the Kubernetes configurations I have used, a default limit is usually
applied (or we validate that a limit is set) at pod admission time, so
all is well behaved. This is what I'd recommend for other users as well.


Having said that, I *think* that if we can sanely and safely deduce that
a limit was not set, we can and should size the VM accordingly (number
of vCPUs matching that of the host). AFAIU we should be able to identify
the scenario if, for a given container in the pod:
    unbound := cfs quota == -1 && cpusets are not specified

This function would need some refactoring, and a lot of unit tests:
https://github.com/kata-containers/kata-containers/blob/7d37fbfdfba05be660b11f2b4d6545dfdfbecd63/src/runtime/virtcontainers/sandbox.go#L1884
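
For illustration, a minimal sketch of that check using the OCI runtime
spec types (the helper name and its placement are hypothetical, not the
actual sandbox.go code):

    package vcpu

    import (
        specs "github.com/opencontainers/runtime-spec/specs-go"
    )

    // isUnbound reports whether a container's CPU resources leave it
    // "unbound": no CFS quota and no cpuset pinning.
    func isUnbound(cpu *specs.LinuxCPU) bool {
        if cpu == nil {
            return true // no CPU cgroup settings at all
        }
        // In the CFS controller, an unset quota or a quota of -1 means
        // "no bandwidth limit".
        noQuota := cpu.Quota == nil || *cpu.Quota == -1
        // An empty Cpus string means no explicit cpuset was specified.
        noCpuset := cpu.Cpus == ""
        return noQuota && noCpuset
    }

A pod-level decision would then loop over the pod's containers and fall
back to host-sized vCPUs when they come back unbound.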

--Eric



On Tue, Jul 6, 2021 at 6:31 PM Adams, Eric <eric.adams at intel.com> wrote:

> Christophe,
>
> I re-read the limits, requests, resource quotas, and pod overhead
> Kubernetes pages and imagined what a dev running container workloads
> would expect, and also considered what a sysadmin would worry about. I
> had some time this afternoon, so I tried some experiments to better
> understand how requests/limits actually work. I've been meaning to dig
> into this for my own understanding from the user perspective. After
> thinking through these different scenarios, one thing I would consider
> changing is making the configuration.toml default vCPU setting a
> podman- or docker-only setting. For Kubernetes I would just ignore that
> field and hotplug CPUs based on the requests and limits set in the
> Kubernetes yaml files, with a default of 1 vCPU at minimum. There might
> be a good reason for Kubernetes to allow someone to set the default
> base vCPU higher than 1, but I can't think of a scenario right now.
>
> Here is what I observe. I was going back and forth on this email all
> day, so I hope I didn't make a typo in what I observed.
>
> For normal Kubernetes with no Kata:
> 1) Limit is the max amount of CPU that you will get, and the max
> performance your workload can achieve. Request is the minimum amount of
> CPU that you are guaranteed to receive. The pod's request and limit are
> the sums of all container requests/limits in the pod (a sketch of that
> summation follows this list).
> 2) If you set a request too big to schedule (say 100 CPUs), then the
> pod gets stuck in Pending, which is expected.
> 3) If you set a request but no limit, and no LimitRange is set for the
> namespace you are in, then you get all the CPU resources of the node. I
> tried this by compiling the Linux kernel, and indeed I did max out all
> 96 cores when using a request of just 5 with no limit.
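>
> As a rough sketch of that per-pod summation (using the standard
> Kubernetes Go API types; this is illustrative, not kubelet code):
>
>     package podres
>
>     import (
>         corev1 "k8s.io/api/core/v1"
>         "k8s.io/apimachinery/pkg/api/resource"
>     )
>
>     // sumCPURequests adds up the CPU requests of every container in a
>     // pod; the pod's effective request is this sum (plus, for Kata,
>     // the configured pod overhead).
>     func sumCPURequests(pod *corev1.Pod) *resource.Quantity {
>         total := resource.NewMilliQuantity(0, resource.DecimalSI)
>         for _, c := range pod.Spec.Containers {
>             if r, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
>                 total.Add(r)
>             }
>         }
>         return total
>     }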
>
> For Kata 2.1.1, this is what I observe:
> 1) If you set no requests and no limits, then no extra vCPUs are
> hotplugged. I set a request of 5 with no limit with runc, and it used
> all 96 cores; with Kata I only got 1 total vCPU for the pod. I don't
> think this is the same as issue 2130 below, but it is what the user
> reported in 2071. Since the request is a minimum, I think it should add
> up all the requests and hotplug that as a minimum.
> 2) I also observed that Pod Overhead isn't used in the calculation for
> hotplugged CPUs. In the scenario where you have a request and no limit,
> you would expect to get at least enough vCPUs to ensure the workload
> meets the request. If you don't specify any request or limit, you get
> one total vCPU. If you were to compare this to a runc pod, it would get
> at least 1 CPU but more than likely a lot more. I doubt this happens
> much, because most people probably enforce a default limit when one is
> not set, which would cover the Kata case.
> 3) In my cluster, the Pod Overhead was 250m CPU for the kata-qemu
> namespace. If I request/limit 9 CPUs for one container, then I get 10
> in the pod. If I request/limit 9.25 CPUs, I get 11 in the pod. If I
> request/limit 9.75 CPUs, I also get 11 in the pod. Finally, if I
> request/limit 9.90 CPUs, I get 11 in the pod. In the case where you
> request 9.25 vCPUs and Pod Overhead is 0.25 vCPUs, it seems you could
> get away with only having 10 vCPUs in the pod (see the sketch below).
> Having one extra vCPU hotplugged probably isn't a huge overhead,
> though.
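>
> A minimal sketch of the rounding I have in mind (the names are made up;
> the point is to sum the millicores first and take a single ceiling at
> the end):
>
>     package vcpu
>
>     // neededVCPUs adds container CPU limits and the pod overhead in
>     // millicores, then rounds up once over the total.
>     func neededVCPUs(limitsMilli []int64, overheadMilli int64) int {
>         total := overheadMilli
>         for _, m := range limitsMilli {
>             total += m
>         }
>         // 9250m + 250m = 9500m -> 10 vCPUs, instead of the 11 observed
>         // when each value is rounded up separately.
>         return int((total + 999) / 1000)
>     }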
>
> I should probably put the below in a table format. Right now I don't
> see that pod overhead is used in the hotplug calculation. I don't know
> that it matters, since you get 1 vCPU by default and any additional
> limits are hotplugged in. However, in the case where no requests or
> limits are set, only the 1 default vCPU is used for the pod. In that
> scenario your workload would have an estimated max of 750m CPU (1 vCPU
> minus the 250m pod overhead).
>
> Different scenarios to consider (a combined sizing sketch follows after
> scenario 6):
> 1) Pod with multiple containers with request/limit set for everything
>
> In this scenario you would add up all the limits and hotplug at least
> that many vCPUs. That seems to work already for Kata: if I start two
> containers, each with a limit of 3, then I end up with 7 vCPUs in the
> pod (3 + 3 plus the 1 default vCPU).
>
> 2) Pod with multiple containers with only request set
>
> In this case I would hotplug the sum of all CPU requests. In the runc
> scenario the pod would use the entire system, but with Kata the request
> would become the limit. I feel that if someone sets a request with no
> limit, then there is no expectation for it to go higher than the
> request. This warrants further discussion, though.
>
> 3) Pod with multiple containers with only limit set
>
> This already works how I would expect: Kata adds up all the limits and
> hotplugs that many vCPUs.
>
> 4) Pod with multiple containers, with a limit set on one container but
> only a request set on another
>
> This one is tricky. I think inside the pod there are cgroups where you
> could limit one container to a slice of CPU/memory, but I have not
> looked into that. Logically, I would expect you to add the vCPUs from
> the containers with limits to the vCPUs from the containers that only
> have requests. Ex: container 1 has a request of 2 and no limit;
> container 2 has no request and a limit of 3. For this I would hotplug 5
> vCPUs and ensure container 1 gets at least 2 vCPUs in the pod.
> Container 2 would get whatever is left when container 1 isn't busy.
>
> 5) Pod with multiple containers, with limits on one container but
> requests/limits not set on another
>
> Another tricky one. For this one, what makes sense to me is to add up
> the limits and hotplug that number of vCPUs. That seems to be the case
> now. The container with nothing set would get whatever it gets, and
> that would likely be less than a vCPU when container 1 is busy.
>
> 6) Pod with multiple containers with no request/limits set
>
> In this case nothing is hotplugged and the pod runs on the 1 default
> vCPU.
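>
> Putting scenarios 1-6 together, the per-container rule I'm suggesting
> boils down to something like this (an illustrative sketch; the types
> and names are made up):
>
>     package vcpu
>
>     // containerCPU holds a container's CPU request and limit in
>     // millicores; 0 means "unset".
>     type containerCPU struct {
>         requestMilli int64
>         limitMilli   int64
>     }
>
>     // vcpusToHotplug takes the limit when set, falls back to the
>     // request, and otherwise contributes nothing; the pod keeps its
>     // 1 default vCPU either way.
>     func vcpusToHotplug(containers []containerCPU) int {
>         var totalMilli int64
>         for _, c := range containers {
>             switch {
>             case c.limitMilli > 0:
>                 totalMilli += c.limitMilli // scenarios 1, 3, 5
>             case c.requestMilli > 0:
>                 totalMilli += c.requestMilli // scenarios 2, 4
>             }
>             // scenario 6: neither set, so nothing is contributed
>         }
>         return int((totalMilli + 999) / 1000) // one ceiling over the sum
>     }
>
> For scenario 4 above, this gives 2000m + 3000m, i.e. 5 hotplugged
> vCPUs.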
>
> At the very least, not having enough vCPUs for a container that
> requested a certain minimum amount is a bug.
>
> Thanks
> Eric
>
> -----Original Message-----
> From: Christophe de Dinechin <dinechin at redhat.com>
> Sent: Thursday, July 1, 2021 8:02 AM
> To: Eric Ernst <eric.g.ernst at gmail.com>
> Cc: kata-dev <kata-dev at lists.katacontainers.io>
> Subject: Re: [kata-dev] Performance isolation: expectations for number of
> CPUs
>
>
>
> > On 1 Jul 2021, at 16:55, Eric Ernst <eric.g.ernst at gmail.com> wrote:
> >
> > In this example, you’ll want to clarify what "the CPUs requested"
> > means. I’ll assume limits=requests, and that you’re referring to a
> > Kubernetes pod.
>
> Indeed, see the linked issues for examples. Also, ideally, we would
> like this to work with a request but no limit.
>
> > Based on that I’d expect 12. I would not recommend four default vCPUs
> though.
>
> The 3, 4 and 5 were just examples to get different numbers as output. I
> chose four for the VM's initial vCPUs to illustrate that we may have a
> possible workaround for request.vcpu=4 not doing anything.
>
> >
> > Eric
> >
> > Sent from my iPhone
> >
> >> On Jul 1, 2021, at 3:54 AM, Christophe de Dinechin <dinechin at redhat.com>
> wrote:
> >>
> >> An interesting question arose about the number of CPUs we want to get
> in the VM, notably in the context of
> https://github.com/kata-containers/kata-containers/issues/2071 as well as
> regarding https://github.com/kata-containers/kata-containers/pull/2131, a
> fix for https://github.com/kata-containers/kata-containers/issues/2130.
> >>
> >> Let's say that we have two containers A and B requesting 5 and 3 CPUs
> respectively. How many CPUs should we get in the VM? Let us assume that the
> default number of VCPUs is 4.
> >>
> >> Possible answers:
> >>
> >> A) 4 (default number of VCPUs), because the current OCI spec does not
> give us information about the number of CPUs. That's how I interpret
> Julio's answer,
> https://github.com/kata-containers/kata-containers/issues/2071#issuecomment-865034753
> .
> >>
> >> B) 5 (maximum request). This seems to be more or less how the Rust
> agent behaves today, making sure that each time there is a request, we
> online at least that many CPUs.
> >>
> >> C) 8 (sum of requests for all containers). If the two containers
> request CPUs, they have good reasons to, so we should honor both requests
> independently. This seems to be what the runtime has in mind, since it
> hotplugs the new CPUs and the struct VM field is called "cpusDelta"
> >>
> >> D) 6 (maximum request, plus one for the agent). During the performance
> isolation meetings, we seem to have shifted towards the idea that the agent
> should get a dedicated CPU.
> >>
> >> E) 9 (sum of requests, plus one for the agent)
> >>
> >> F) 9 (maximum request plus the four original)
> >>
> >> G) 12 (sum of requests plus the four original)
> >>
> >> H) 42 (the correct answer in most cases)
> >>
> >> What do you think?
> >>
> >>
> >> _______________________________________________
> >> kata-dev mailing list
> >> kata-dev at lists.katacontainers.io
> >> http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev
> >
>
>
> _______________________________________________
> kata-dev mailing list
> kata-dev at lists.katacontainers.io
> http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev
>