<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal">Eric,<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">You summarized it well. The unbounded burstable is probably not something allowed in many large clusters anyways which is why I don’t think this is a broadly impactful issue. I think though with a bit of tweaking Kata could both deliver
on the user expectation and correctly inform and prepare the sys admin of the Kata overhead. I think all of these mini discussions could help with a future Kata tuning guide for different use cases. <o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Eric<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> Eric Ernst <eric.g.ernst@gmail.com> <br>
<b>Sent:</b> Tuesday, July 6, 2021 7:05 PM<br>
<b>To:</b> Adams, Eric <eric.adams@intel.com><br>
<b>Cc:</b> Christophe de Dinechin <dinechin@redhat.com>; kata-dev <kata-dev@lists.katacontainers.io><br>
<b>Subject:</b> Re: [kata-dev] Performance isolation: expectations for number of CPUs<o:p></o:p></p>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<p class="MsoNormal">Thanks for the explanation Eric. <o:p></o:p></p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">The high level summary on why you're seeing this: <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"> Kata works well, and correctly imo, when a pod is 'guaranteed', as well as 'burstable' if a limit is set. We do not 'properly' support bestEffort or unbound burstable (today).<o:p></o:p></p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
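<div>
<p class="MsoNormal">For context, a rough sketch (mine, not the actual runtime code) of why a limit is enough to size from: Kubernetes turns the CPU limit into a CFS quota/period pair, and the vCPU count falls out of that:<o:p></o:p></p>
</div>
<pre>// Sketch only: how a Kubernetes CPU limit becomes a vCPU count.
// Kubernetes turns "limit: 2500m" into quota=250000us per period=100000us,
// so honoring it needs ceil(quota/period) = 3 vCPUs.
func vcpusForLimit(quota int64, period uint64) uint32 {
        if quota &lt;= 0 || period == 0 {
                return 0 // no limit set: nothing to size from
        }
        return uint32((uint64(quota) + period - 1) / period) // ceiling division
}</pre>
<div>
<p class="MsoNormal"><o:p>&nbsp;</o:p></p>
</div>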
<div>
<p class="MsoNormal">In the configurations I have used of Kubernetes, a default limit is usually applied (or we validate that a limit is set) at pod admission time, so all is well behaved. This is what I'd recommend for other users as well.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Having said that, I *think* if we can sanely and safely deduce that a limit was not set, we can and should size the VM accordingly (# vCPU matching that of the host). AFAIU we should be able to identify the scenario if, for a given container
in the pod:<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"> unbound := cfs quota == -1 && cpusets are not specified <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">This function would need some refactoring, and a lot of unit tests: <a href="https://github.com/kata-containers/kata-containers/blob/7d37fbfdfba05be660b11f2b4d6545dfdfbecd63/src/runtime/virtcontainers/sandbox.go#L1884">https://github.com/kata-containers/kata-containers/blob/7d37fbfdfba05be660b11f2b4d6545dfdfbecd63/src/runtime/virtcontainers/sandbox.go#L1884</a><o:p></o:p></p>
</div>
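<div>
<p class="MsoNormal">Something like the following is what the check could look like, sketched against the OCI runtime-spec types (treating a nil quota the same as -1 here is an assumption to verify):<o:p></o:p></p>
</div>
<pre>import specs "github.com/opencontainers/runtime-spec/specs-go"

// isUnbound sketches the proposed check: a container has no effective CPU
// bound when its CFS quota is unset or -1 AND no cpuset pins it to cores.
func isUnbound(res *specs.LinuxResources) bool {
        if res == nil || res.CPU == nil {
                return true // no CPU resources specified at all
        }
        noQuota := res.CPU.Quota == nil || *res.CPU.Quota == -1
        noCpuset := res.CPU.Cpus == ""
        return noQuota &amp;&amp; noCpuset
}</pre>
<div>
<p class="MsoNormal">If any container in the pod comes back unbound, the sandbox would be sized to the host CPU count instead of the sum of limits.<o:p></o:p></p>
</div>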
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">--Eric<o:p></o:p></p>
</div>
<div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</div>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal">On Tue, Jul 6, 2021 at 6:31 PM Adams, Eric <<a href="mailto:eric.adams@intel.com">eric.adams@intel.com</a>> wrote:<o:p></o:p></p>
</div>
<blockquote style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal">Christophe,<br>
<br>
I re-read the Kubernetes pages on limits, requests, resource quotas, and pod overhead, imagined what a dev running container workloads would expect, and also considered what a sysadmin would worry about. I had some time this afternoon, so I ran some experiments to better understand how requests/limits actually work. I've been meaning to dig into this for my own understanding from the user perspective. After thinking through these different scenarios, one thing I would consider changing is making the configuration.toml default vCPU setting a podman- or docker-only setting. For Kubernetes I would just ignore that field and hotplug CPUs based on the requests and limits set in the Kubernetes yaml files, with a default of 1 vCPU at minimum. There might be a good reason for Kubernetes to allow someone to set the default base vCPU count higher than 1, but I can't think of a scenario now.
<br>
<br>
Here is what I observe. I was going back and forth on this email all day, so I hope I didn't make a typo in what I observed.
<br>
<br>
For normal Kubernetes with no Kata:<br>
1) Limit is the maximum number of CPUs that you will get, and the maximum performance your workload can achieve. Request is the minimum number of CPUs that you are guaranteed to receive. The pod's request and limit are the sums of all container requests/limits in the pod.<br>
2) If you set a request too big to schedule (say 100 CPUs), the pod gets stuck in Pending, which is expected.
<br>
3) If you set a request but no limit, and no LimitRange is set for the namespace you are in, then you get all the CPU resources of the node. I tried this by compiling the Linux kernel, and indeed I maxed out all 96 cores using a request of just 5 with no limit.<br>
<br>
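</p>
<p class="MsoNormal">For reference, my understanding of the kubelet math behind that behavior (a sketch, not the kubelet source): the request only sets cpu.shares, a relative weight that acts as a floor under contention, while the limit sets the hard CFS quota; that is why a request with no limit can burst to every core on the node.<o:p></o:p></p>
<pre>// Sketch of how Kubernetes maps CPU requests/limits onto CFS knobs
// (the same idea as the kubelet's MilliCPUToShares/MilliCPUToQuota).
const (
        sharesPerCPU  = 1024   // cgroup cpu.shares for one full CPU
        quotaPeriodUs = 100000 // 100ms CFS period
)

// request -> cpu.shares: a relative weight, not a cap
func cpuShares(requestMilli int64) int64 {
        return requestMilli * sharesPerCPU / 1000
}

// limit -> cfs_quota_us: the hard cap; -1 means unbounded
func cpuQuota(limitMilli int64) int64 {
        if limitMilli == 0 {
                return -1
        }
        return limitMilli * quotaPeriodUs / 1000
}</pre>
<p class="MsoNormal">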
For Kata 2.1.1, this is what I observe:<br>
1) If you set no requests and no limits, then no extra vCPUs are hotplugged. I set a request of 5 with no limit: with runc it used all 96 cores, but with Kata I only got 1 vCPU total for the pod. I don't think this is the same as issue 2130 below, but it is what the user reported in 2071. Since the request is a minimum, I think Kata should add up all the requests and hotplug at least that many vCPUs.
<br>
2) I also observed that pod overhead isn't used in the calculation for hotplugged vCPUs. In the scenario where you have a request and no limit, you would expect to get at least enough vCPUs to ensure the workload meets the request. If you don't specify any request or limit, you get one vCPU total. A comparable runc pod would get at least 1 CPU, and more than likely a lot more. I doubt this happens much, because most people probably enforce a limit if one is not set, which would cover the Kata case.<br>
3) In my cluster, the pod overhead was 250m CPU for the kata-qemu RuntimeClass. If I request/limit 9 CPUs for one container, I get 10 in the pod. If I request/limit 9.25 CPUs for one container, I get 11 in the pod. If I request/limit 9.75 CPUs for one container, I also get 11 in the pod. Finally, if I request/limit 9.90 CPUs for one container, I get 11 in the pod. In the case where you request 9.25 vCPUs and pod overhead is 0.25 vCPU, it seems you could get away with only having 10 vCPUs in the pod. One extra hotplugged vCPU probably isn't a huge overhead, though.<br>
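(If those data points generalize, the observed sizing looks like: total vCPUs = 1 default + ceil(sum of container limits), with the 250m pod overhead ignored; e.g. 1 + ceil(9.0) = 10 and 1 + ceil(9.25) = 11.)<br>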
<br>
I should probably put the below in a table format. Right now I don't see that pod overhead is used in the hotplug calculation. I don't know that it matters, since you get 1 vCPU by default and any additional limits are hotplugged in. However, in the case where no requests or limits are set, 1 vCPU is used for the pod. In that scenario your workload would have an estimated maximum of 750m CPU (the 1 vCPU minus the 250m overhead).<br>
<br>
Different scenarios to consider<br>
1) Pod with multiple containers with request/limit set for everything<br>
<br>
In this scenario you would add up all the limits and hotplug at least that many vCPUs. That already seems to work for Kata: if I start two containers, each with a limit of 3, I end up with 7 vCPUs in the pod (the 1 default plus 6 hotplugged).
<br>
<br>
2) Pod with multiple containers with only request set<br>
<br>
In this case I would hotplug the sum of all the CPU requests. In the runc scenario the pod would use the entire system, but with Kata the request would become the limit. I feel that if someone sets a request with no limit, there is no expectation for it to go higher than the request. This warrants further discussion, though. <br>
<br>
3) Pod with multiple containers with only limit set<br>
<br>
This already works how I would expect: Kata adds up all the limits and hotplugs that many vCPUs.<br>
<br>
4) Pod with multiple containers with some limits set in one container but requests set in another container<br>
<br>
This one is tricky. I think inside the pod there are cgroups with which you could limit one container to a slice of CPU/memory, but I have not looked into that. Logically, I would expect you to take the limits from the containers that have them and add the requests from the containers that only have requests. Ex: container 1 has a request of 2 and an unspecified limit; container 2 has an unspecified request and a limit of 3. For this I would hotplug 5 vCPUs and ensure container 1 gets at least 2 vCPUs in the pod. Container 2 would get whatever it gets when container 1 isn't busy.<br>
<br>
5) Pod with multiple containers with limits on one container but requests/limits not set on another<br>
<br>
Another tricky one. For this one, what makes sense to me is to add up the limits and hotplug that number of vCPUs. That seems to be the case now. The container with nothing set would get whatever it gets, and that would likely be less than a vCPU when container 1 is busy.<br>
<br>
6) Pod with multiple containers with no request/limits set<br>
<br>
In this case nothing is hotplugged. <br>
<br>
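</p>
<p class="MsoNormal">Putting the six scenarios together, here is a rough sketch of the sizing policy I am describing (made-up names, for discussion only, not the current runtime logic):<o:p></o:p></p>
<pre>// Rough sketch of the per-pod sizing policy from scenarios 1-6.
// Hypothetical types and names; not the current Kata runtime code.
type containerCPU struct {
        requestMilli int64 // 0 if unset
        limitMilli   int64 // 0 if unset
}

// vcpusToHotplug returns how many whole vCPUs to add on top of the
// pod's default vCPU.
func vcpusToHotplug(containers []containerCPU) int64 {
        var sumMilli int64
        for _, c := range containers {
                switch {
                case c.limitMilli > 0: // a limit wins when present (scenarios 1, 3, 5)
                        sumMilli += c.limitMilli
                case c.requestMilli > 0: // request-only becomes the bound (scenarios 2, 4)
                        sumMilli += c.requestMilli
                }
                // nothing set (scenario 6): contributes 0
        }
        return (sumMilli + 999) / 1000 // round up to whole vCPUs
}</pre>
<p class="MsoNormal">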
At the very least, not having enough vCPUs for a container that requested a certain minimum amount is a bug.<br>
<br>
Thanks<br>
Eric<br>
<br>
-----Original Message-----<br>
From: Christophe de Dinechin <<a href="mailto:dinechin@redhat.com" target="_blank">dinechin@redhat.com</a>>
<br>
Sent: Thursday, July 1, 2021 8:02 AM<br>
To: Eric Ernst <<a href="mailto:eric.g.ernst@gmail.com" target="_blank">eric.g.ernst@gmail.com</a>><br>
Cc: kata-dev <<a href="mailto:kata-dev@lists.katacontainers.io" target="_blank">kata-dev@lists.katacontainers.io</a>><br>
Subject: Re: [kata-dev] Performance isolation: expectations for number of CPUs<br>
<br>
<br>
<br>
> On 1 Jul 2021, at 16:55, Eric Ernst <<a href="mailto:eric.g.ernst@gmail.com" target="_blank">eric.g.ernst@gmail.com</a>> wrote:<br>
> <br>
> In this example, you’ll want to clarify what “the CPUs requested” means. I’ll assume limits=request, and that you’re referring to a Kubernetes pod.<br>
<br>
Indeed, see the linked issues for examples. Also, ideally, we would like this to work with request but no limit.<br>
<br>
> Based on that I’d expect 12. I would not recommend four default vCPUs though. <br>
<br>
The 3, 4, and 5 were just examples chosen to produce different numbers as output. I chose four initial vCPUs for the VM to illustrate that we may have a possible workaround for request.vcpu=4 not doing anything.<br>
<br>
> <br>
> Eric<br>
> <br>
> Sent from my iPhone<br>
> <br>
>> On Jul 1, 2021, at 3:54 AM, Christophe de Dinechin <<a href="mailto:dinechin@redhat.com" target="_blank">dinechin@redhat.com</a>> wrote:<br>
>> <br>
>> An interesting question arose about the number of CPUs we want to get in the VM, notably in the context of
<a href="https://github.com/kata-containers/kata-containers/issues/2071" target="_blank">
https://github.com/kata-containers/kata-containers/issues/2071</a> as well as regarding
<a href="https://github.com/kata-containers/kata-containers/pull/2131" target="_blank">
https://github.com/kata-containers/kata-containers/pull/2131</a>, a fix for <a href="https://github.com/kata-containers/kata-containers/issues/2130" target="_blank">
https://github.com/kata-containers/kata-containers/issues/2130</a>.<br>
>> <br>
>> Let's say that we have two containers A and B requesting 5 and 3 CPUs respectively. How many CPUs should we get in the VM? Let us assume that the default number of vCPUs is 4.<br>
>> <br>
>> Possible answers:<br>
>> <br>
>> A) 4 (default number of VCPUs), because the current OCI spec does not give us information about the number of CPUs. That's how I interpret Julio's answer,
<a href="https://github.com/kata-containers/kata-containers/issues/2071#issuecomment-865034753" target="_blank">
https://github.com/kata-containers/kata-containers/issues/2071#issuecomment-865034753</a>.<br>
>> <br>
>> B) 5 (maximum request). This seems to be more or less how the Rust agent behaves today, making sure that each time there is a request, we online at least that many CPUs.<br>
>> <br>
>> C) 8 (sum of requests for all containers). If the two containers request CPUs, they have good reasons to, so we should honor both requests independently. This seems to be what the runtime has in mind, since it hotplugs the new CPUs and the struct VM field is called "cpusDelta".<br>
>> <br>
>> D) 6 (maximum request, plus one for the agent). During the performance isolation meetings, we seem to have shifted towards the idea that the agent should get a dedicated CPU.<br>
>> <br>
>> E) 9 (sum of requests, plus one for the agent)<br>
>> <br>
>> F) 9 (maximum request plus the four original)<br>
>> <br>
>> G) 12 (sum of requests plus the four original)<br>
>> <br>
>> H) 42 (the correct answer in most cases)<br>
>> <br>
>> What do you think? <br>
>> <br>
>> <br>
>> _______________________________________________<br>
>> kata-dev mailing list<br>
>> <a href="mailto:kata-dev@lists.katacontainers.io" target="_blank">kata-dev@lists.katacontainers.io</a><br>
>> <a href="http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev" target="_blank">
http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev</a><br>
> <br>
<br>
<br>
_______________________________________________<br>
kata-dev mailing list<br>
<a href="mailto:kata-dev@lists.katacontainers.io" target="_blank">kata-dev@lists.katacontainers.io</a><br>
<a href="http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev" target="_blank">http://lists.katacontainers.io/cgi-bin/mailman/listinfo/kata-dev</a><o:p></o:p></p>
</blockquote>
</div>
</div>
</body>
</html>