[kata-dev] Performance isolation: expectations for number of CPUs

Christophe de Dinechin dinechin at redhat.com
Thu Jul 8 09:13:08 UTC 2021


On 2021-07-07 at 15:56 UTC, "Adams, Eric" <eric.adams at intel.com> wrote...
> Chris,
>
>> By the way, if my understanding is correct, the correct response to my
>> question is G.
>
> Yes, in your example I would pick G. However, having a VM always hotplug 4 vCPUs
> by default seems kind of high.

As I pointed out in an earlier response to Eric, the value 4 was only there
for the sake of illustration; I picked it because it is more "special" than 1,
and therefore easier to spot in the A-G examples.

>
>> I would say, to the contrary, that the expectation in that case is that you
>> get "as much as you can", since this is the runc behaviour. In other words, if I
>> compile my Linux kernel and request 5 CPUs, I'm happy if I get 5 CPUs, and I'm
>> happier if I get 96. However, I cannot be satisfied to get only one.
>
> I tried to look at this from different points of view. If you don't own the
> cluster and request 5 CPUs with no limit set, you hope you can get more, but
> there is no guarantee. As a developer I would probably never set a limit and
> only set my minimum request, always hoping for more. The sysadmin might
> enforce limits. I wonder how this really works in practice. With Kata I
> personally feel it is OK if the request becomes the limit even if runc can
> run unbounded.

Again, the problem is that we do not _have_ the request value, because k8s
does not send it to the runtime at the moment. So the question is what we do
without that value.
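
To make this concrete, here is a minimal Go sketch of the CPU
information that does reach the runtime through the OCI spec (the
field names match the runtime-spec Go bindings; the sizing function is
only my reading of the behaviour discussed in this thread, not the
actual Kata code):

    package main

    import (
        "fmt"
        "math"
    )

    // Abridged copy of the OCI runtime-spec LinuxCPU resource block.
    type LinuxCPU struct {
        Shares *uint64 // from the k8s request, but only as a relative weight
        Quota  *int64  // from the k8s limit; nil when no limit is set
        Period *uint64 // CFS period, typically 100000 microseconds
    }

    // vCPUsFromLimit rounds quota/period up to whole vCPUs.
    // Without a limit, there is nothing to size from.
    func vCPUsFromLimit(cpu LinuxCPU) int {
        if cpu.Quota == nil || cpu.Period == nil || *cpu.Quota <= 0 {
            return 0
        }
        return int(math.Ceil(float64(*cpu.Quota) / float64(*cpu.Period)))
    }

    func main() {
        period, quota := uint64(100000), int64(500000) // limit: cpu=5
        fmt.Println(vCPUsFromLimit(LinuxCPU{Quota: &quota, Period: &period})) // 5
        fmt.Println(vCPUsFromLimit(LinuxCPU{})) // 0: request but no limit
    }

A request-only container leaves Quota unset, so a runtime that sizes
vCPUs from quota and period has nothing to work with.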

> In the runc
> case you get at least 5 CPUs, and perhaps the entire system at a quiet time,
> but with Kata you would get at least 5 vCPUs plus some fragment of a vCPU
> unused by the VM or agent, and never have the opportunity to burst higher than
> your request. I am assuming the default vCPU count is 1 and that Kata would
> hotplug at least the requested number of vCPUs, leaving the original vCPU with
> a little spare processing power.

Unfortunately, this is precisely the assumption that is currently incorrect:
we do not honor requests without limits. Instead, the VM gets only 1 vCPU (or
whatever was set as the default vCPUs in the configuration file).
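
Building on the sketch above (and assuming the stock default of 1 vCPU),
the resulting VM size is roughly:

    // Hypothetical helper, not the actual Kata code: the VM ends up
    // with the default vCPUs plus whatever the limits imply.
    func podVCPUs(defaultVCPUs int, containers []LinuxCPU) int {
        total := defaultVCPUs
        for _, c := range containers {
            total += vCPUsFromLimit(c) // request-only containers add 0
        }
        return total
    }

So a pod whose containers set requests but no limits ends up with just
the default vCPU, no matter how large the requests are.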

> I should note that runc running unbounded on multi-
> socket systems can sometimes be a lot slower than if you just ran on a limited
> number of CPUs confined to a NUMA region. I believe I once found a workload
> where Kata with 1/4 the CPUs ran 2x faster than an equivalent runc.

This is one reason we might want to let sysadmins define the maximum number of
CPUs allocated to a VM. NUMA awareness is a separate topic, though one worth
exploring too.
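
For example, something along these lines in configuration.toml
(default_vcpus and default_maxvcpus are existing settings; the values
here are only illustrative):

    [hypervisor.qemu]
    # vCPUs given to every VM at boot, before any hotplug
    default_vcpus = 1
    # upper bound on the vCPUs a VM can reach through hotplug
    default_maxvcpus = 8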

> Once I
> reduced the number of processors to stay within a NUMA boundary, runc ran
> much, much faster. That was a lot of words to say why I think having different
> behavior between Kata and runc in the unbounded "no limit" case is OK, as long
> as the minimum expectation of the request is met.

Precisely. The whole point of the discussion is that the request is not met.

>
> Thanks
> Eric
>
>
> -----Original Message-----
> From: Christophe de Dinechin <dinechin at redhat.com>
> Sent: Wednesday, July 7, 2021 3:34 AM
> To: Adams, Eric <eric.adams at intel.com>
> Cc: Eric Ernst <eric.g.ernst at gmail.com>; kata-dev at lists.katacontainers.io
> Subject: Re: [kata-dev] Performance isolation: expectations for number of CPUs
>
>
> On 2021-07-07 at 01:31 UTC, "Adams, Eric" <eric.adams at intel.com> wrote...
>> Christophe,
>>
>> I re-read the limits, requests, resource quotas, and pod
>> overhead Kubernetes pages and imagined what a dev running container
>> workloads would expect, and also considered what a sysadmin would
>> worry about. I had some time this afternoon, so I tried some
>> experiments to better understand how requests/limits actually
>> work. I've been meaning to dig into this for my own understanding
>> from the user perspective. After thinking through these different
>> scenarios, one thing I would consider changing is making the
>> configuration.toml default vCPU setting a podman- or docker-only
>> setting. For Kubernetes I would just ignore that field and hotplug
>> CPUs based on the requests and limits set in the Kubernetes yaml files,
>> with a default of 1 vCPU at minimum. There might be a good reason for
>> Kubernetes to allow someone to set the default base vCPU count higher
>> than 1, but I can't think of a scenario now.
>
> The default vCPU count is an orthogonal question, somewhat related to pod overhead.
>
> Additional host CPUs could be beneficial to handle I/Os (which typically cost
> more in a VM) or to give more scheduling freedom. We discussed such a scenario
> in the performance isolation use case meeting, when the agent itself starts
> using a lot of CPU, e.g. to process container console output. That is probably
> not the only case.
>
>>
>> Here is what I observe. I was going back and forth on this email all day, so I
>> hope I didn't make a typo in what I observed.
>>
>> For normal Kubernetes with no Kata.
>> 1) Limit is the maximum number of CPUs that you will get, and the max
>> performance your workload can achieve. Request is the minimum number
>> of CPUs that you are guaranteed to receive. The Request and Limits
>> are the sum of all container requests/limits in a pod.
>> 2) If you set a request too big to satisfy (say 100 CPUs), then the pod gets
>> stuck in pending, which is expected.
>> 3) If you set a request but no limit, and no LimitRange is set for the
>> namespace you are in, then you can get all the CPU resources of the cluster.
>> I tried this by compiling the Linux kernel, and indeed I did max out
>> all 96 cores when using a request of just 5 with no limit.
>
> This is consistent with my understanding. For a normal runtime, the CPU request
> is only used to schedule the pod on a sufficiently large node (and presumably
> one where there is enough leftover CPU capacity: I assume, for instance, that
> you cannot fit four CPU requests of 32 on a single 96-CPU host).
>
> By the way, if my understanding is correct, the correct response to my question
> is G.
>
>
>>
>> For Kata 2.1.1 this is what I observe
>> 1) If you set no requests and no limits, then no extra vCPUs are
>> hotplugged. I set a request of 5 with no limit with runc and it used all
>> 96 cores. With Kata I only got 1 total vCPU for the pod. I don't
>> think this is the same as issue 2130 below, but it is what the user
>> reported in 2071. Since the request is a minimum, I think Kata should add up
>> all the requests and hotplug that as a minimum.
>
> Yes, this is 2071.
>
>
>> 2) I also observed that Pod Overhead isn't used in the calculation for
>> hotplugged CPUs.
>
> That seems correct. The pod overhead accounts for the overhead "outside" the VM,
> i.e. virtiofsd, qemu's own memory needs, the extra cost of doing I/Os, etc. So
> these are additional resources the host needs, not the VM.
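
(Side note for readers following along: the pod overhead is declared on
the RuntimeClass. A hypothetical example matching the 250m figure Eric
reports below; the handler name and memory value are illustrative only:

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata-qemu
    handler: kata-qemu
    overhead:
      podFixed:
        cpu: 250m
        memory: 160Mi

The scheduler adds these amounts to the pod's requests on the host side;
as said above, they are not meant to be injected into the guest.)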
>
>
>> It seems that in the scenario where you have a request and no limit,
>> you would expect to get at least enough vCPUs to ensure the workload
>> meets the request. If you don't specify any request or limit, you get
>> one total vCPU. If you were to compare this to a runc pod, it would get
>> at least 1 CPU but more than likely a lot more. I doubt this
>> happens much, because most people probably enforce a limit if one is
>> not set, which would cover the Kata case.
>
> The problem is that we get a really bad outcome if the limit is not set. And
> having to set a limit to improve performance is counter-intuitive.
>
>
>> 3) In my cluster, the Pod Overhead was 250m CPU for the kata-qemu
>> namespace. If I request/limit 9 CPUs for one container then I get 10
>> in the pod. If I request/limit 9.25 CPUs for one container then I get
>> 11 in the pod. If I request/limit 9.75 CPUs for one container I also
>> get 11 in the pod. Finally, if I request/limit 9.90 CPUs for one
>> container then I get 11 in the pod. In the case where you request
>> 9.25 vCPUs and Pod Overhead is 0.25 vCPU, it seems you could get away
>> with only having 10 vCPUs in the pod. That said, one extra hotplugged
>> vCPU probably isn't a huge overhead.
>
> All this is correct if you consider the interpretation of overhead I gave you
> above. We simply round up to the next higher number of VCPUs, and add one for
> the agent.
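
As a sanity check, that rule reproduces Eric's numbers above. Continuing
the Go sketch from earlier in this mail (again my reading, not the
actual code):

    // Round the container CPU limit up to whole vCPUs, then add one
    // vCPU for the agent and other guest-side activity.
    func vmVCPUs(cpuLimit float64) int {
        return int(math.Ceil(cpuLimit)) + 1
    }
    // 9 -> 10, 9.25 -> 11, 9.75 -> 11, 9.90 -> 11
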
>
>
>>
>> I should probably do the below in a table format. Right now I don't
>> see that pod overhead is used in the hotplug calculation. I don't
>> know that it matters, since you get 1 vCPU by default and any
>> additional limits are hotplugged in. However, in the case where no
>> requests or limits are set, only 1 vCPU is used for the pod. In that
>> scenario your workload would have an estimated max of 750m CPU.
>>
>> Different scenarios to consider
>> 1) Pod with multiple containers with request/limit set for everything
>>
>> In this scenario you would add up all the limits and hotplug at least
>> that many vCPUs. That seems to work already for Kata. If I start two
>> containers, each with a limit of 3, then I end up with 7 vCPUs in the pod.
>
> That is indeed correct, except that Kata takes the value of "limit", and not
> the value of "request", to set the number of vCPUs. In a sense, it is a good
> thing, since it means that if you have a request of 5 and a limit of 7, you
> get 7 vCPUs.
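
Continuing the sketch, the sum-of-limits behaviour Eric observed,
assuming one boot vCPU:

    // Hypothetical helper: one boot vCPU (default_vcpus) plus each
    // container's limit rounded up.
    func podVCPUsFromLimits(limits []float64) int {
        total := 1
        for _, l := range limits {
            total += int(math.Ceil(l))
        }
        return total
    }
    // Two containers with a limit of 3 each: 1 + 3 + 3 == 7 vCPUs.
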
>
>>
>> 2) Pod with multiple containers with only request set
>>
>> In this case I would hotplug the sum of all CPU requests. In the runc
>> scenario it would use the entire system, but with Kata the request
>> would become the limit. I feel that if someone sets a request with no
>> limit, then there is no expectation for it to go higher than the
>> request. This warrants further discussion, though.
>
> I would say, to the contrary, that the expectation in that case is that you get
> "as much as you can", since this is the runc behaviour. In other words, if I
> compile my Linux kernel and request 5 CPUs, I'm happy if I get 5 CPUs, and I'm
> happier if I get 96. However, I cannot be satisfied to get only one.
>
> The problem is that, as far as I understand today, we don't get the required
> information to be able to do the right thing here (except as a side effect
> through some annotation for CRI-O). Apparently, k8s assumes that if it does not
> need to pass the information to runc, then it does not need to pass the
> information at all.
>
>>
>> 3) Pod with multiple containers with only limit set
>>
>> This already works the way I would expect. Kata adds all the limits and
>> hotplugs that in.
>
> Yes.
>
>>
>> 4) Pod with multiple containers, with limits set on one container
>> but only requests set on another
>>
>> This one is tricky. I think inside the pod there are cgroups where you
>> could limit one container to a slice of CPU/memory, but I have not looked
>> into that. Logically, I would expect that you would take all the
>> containers with limits and add that number of vCPUs for all the
>> containers that only have requests. Ex: Container1 has a request of 2 and
>> an unspecified limit; Container2 has an unspecified request and a limit of 3.
>> For this I would hotplug 5 vCPUs and ensure container 1 gets at least 2
>> vCPUs in the pod. Container 2 would get whatever it gets when container 1
>> isn't busy.
>>
>> 5) Pod with multiple containers, with limits on one container but
>> requests/limits not set on another
>>
>> Another tricky one. For this one, what makes sense to me is to add up
>> the limits and hotplug that number of vCPUs. That seems to be the case
>> now. The container with nothing set would get whatever it gets, and
>> that would likely be less than a vCPU when container 1 is busy.
>>
>> 6) Pod with multiple containers with no requests/limits set
>>
>> In this case nothing is hotplugged.
>>
>> At the very least, not having enough vCPUs for a container that
>> requested a certain minimum amount is a bug.
>
> Yes, at least relative to runc.
>
>>
>> Thanks
>> Eric
>>
>> -----Original Message-----
>> From: Christophe de Dinechin <dinechin at redhat.com>
>> Sent: Thursday, July 1, 2021 8:02 AM
>> To: Eric Ernst <eric.g.ernst at gmail.com>
>> Cc: kata-dev <kata-dev at lists.katacontainers.io>
>> Subject: Re: [kata-dev] Performance isolation: expectations for number
>> of CPUs
>>
>>
>>
>>> On 1 Jul 2021, at 16:55, Eric Ernst <eric.g.ernst at gmail.com> wrote:
>>>
>>> In this example, you’ll want to clarify what "the CPUs requested" means. I’ll
>>> assume limits=request, and that you’re referring to a Kubernetes pod.
>>
>> Indeed, see the linked issues for examples. Also, ideally, we would like this
>> to work with request but no limit.
>>
>>> Based on that I’d expect 12. I would not recommend four default vCPUs though.
>>
>> The 3, 4 and 5 were just examples to get different numbers as an
>> output. I chose four for the VM initial VCPUs to illustrate that we
>> may have a possible workaround for request.vcpu=4 not doing anything.
>>
>>>
>>> Eric
>>>
>>> Sent from my iPhone
>>>
>>>> On Jul 1, 2021, at 3:54 AM, Christophe de Dinechin <dinechin at redhat.com> wrote:
>>>>
>>>> An interesting question arose about the number of CPUs we want to
>>>> get in the VM, notably in the context of
>>>> https://github.com/kata-containers/kata-containers/issues/2071 as
>>>> well as regarding
>>>> https://github.com/kata-containers/kata-containers/pull/2131, a fix for
>>>> https://github.com/kata-containers/kata-containers/issues/2130.
>>>>
>>>> Let's say that we have two containers A and B requesting 5 and 3 CPUs
>>>> respectively. How many CPUs should we get in the VM? Let us assume that the
>>>> default number of VCPUs is 4.
>>>>
>>>> Possible answers:
>>>>
>>>> A) 4 (default number of VCPUs), because the current OCI spec does
>>>> not give us information about the number of CPUs. That's how I
>>>> interpret Julio's answer,
>>>> https://github.com/kata-containers/kata-containers/issues/2071#issuecomment-865034753.
>>>>
>>>> B) 5 (maximum request). This seems to be more or less how the Rust agent
>>>> behaves today, making sure that each time there is a request, we online at
>>>> least that many CPUs.
>>>>
>>>> C) 8 (sum of requests for all containers). If the two containers
>>>> request CPUs, they have good reasons to, so we should honor both
>>>> requests independently. This seems to be what the runtime has in
>>>> mind, since it hotplugs the new CPUs and the struct VM field is called
>>>> "cpusDelta"
>>>>
>>>> D) 6 (maximum request, plus one for the agent). During the performance
>>>> isolation meetings, we seem to have shifted towards the idea that the agent
>>>> should get a dedicated CPU.
>>>>
>>>> E) 9 (sum of requests, plus one for the agent)
>>>>
>>>> F) 9 (maximum request plus the four original)
>>>>
>>>> G) 12 (sum of requests plus the four original)
>>>>
>>>> H) 42 (the correct answer in most cases)
>>>>
>>>> What do you think?
>>>>
>>>>
>>>
>>
>>


--
Cheers,
Christophe de Dinechin (IRC c3d)



