Hi Xu,

Thanks for the links! KSM is a tricky subject. Memory sharing via KSM enables cross-VM side channels, so we have to be careful not to share memory across trust boundaries. VMs that share the same encryption key can take advantage of KSM among themselves, but not with other tenants' VMs. Perhaps a compromise would be to use the same key for all VMs created on behalf of the same user, in order to get some KSM benefit without the risk of disclosing information to untrusted tenant VMs.

I'm not familiar with VM clone. Is there any documentation or source code I could browse to learn more?

On the topic of trade-offs, there is also a small (1-6%) memory access latency overhead due to the encryption. The worst case is 6% for latency-sensitive workloads (e.g. the SPECint mcf test), but the average latency overhead (measured across all SPECint tests) is ~1.4%. Pages that are marked as unencrypted in the guest page tables are unaffected.

I wasn't able to join the call today, but I can join the next one if you like.

Sincerely,
Jesse

From: Xu Wang [mailto:xu@hyper.sh]
Sent: Wednesday, February 21, 2018 6:17 PM
To: kata-dev@lists.katacontainers.io
Cc: Larrew, Jesse <Jesse.Larrew@amd.com>; Hollingsworth, Brent <brent.hollingsworth@amd.com>; Kaplan, David <David.Kaplan@amd.com>; Woller, Thomas <thomas.woller@amd.com>
Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)

Hi Jesse,

As Sebastien said, memory encryption is an exciting feature for VMs.

Based on the discussion of the runtime merging proposal [1], kata-runtime 1.0 will be based on the runV hypervisor drivers [2] and VM factory [3]. We should consider whether this should be enabled on all qemu-based drivers whenever the CPU supports the feature, or implemented as a separate hypervisor so that users can decide.

It looks like, by enabling memory encryption, we will get:
- the stronger isolation we want
- a bigger memory footprint (VM clone and DAX won't work, and KSM won't work either?)

I will read the references in detail after today's meeting. Will you attend the online meeting [4], Jesse?

[1] https://github.com/kata-containers/runtime/issues/33
[2] https://github.com/hyperhq/runv/tree/master/hypervisor
[3] https://github.com/hyperhq/runv/tree/master/factory
[4] https://etherpad.openstack.org/p/katacontainers-2018-architecture-committee-...

-Xu

On Thu, Feb 22, 2018 6:48 AM, Larrew, Jesse <Jesse.Larrew@amd.com> wrote:

Hi Sebastien,

Thanks for the pointer to the virtcontainers PR. I'll keep an eye on it as you suggested.

It's interesting that you plan to use vanilla qemu for Kata Containers. Do you intend to upstream the "nofw" and "static-prt" accelerators from qemu-lite? Or will those optimizations be abandoned? They seem like a clever solution for reducing the guest boot time.

Sincerely,
Jesse
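As background for the KSM trade-off discussed earlier in the thread: the host kernel only merges memory regions that were explicitly opted in with madvise(MADV_MERGEABLE), and QEMU does this for guest RAM unless page merging is disabled on the machine, so a runtime can choose a sharing policy per VM. A minimal sketch of the standard host-side knobs (the sysfs paths and the mem-merge machine property are the stock Linux/QEMU interfaces; the rest of the command line is omitted here):

    # start the KSM scanner on the host
    echo 1 > /sys/kernel/mm/ksm/run
    # pages currently deduplicated, and how many mappings share them
    cat /sys/kernel/mm/ksm/pages_shared
    cat /sys/kernel/mm/ksm/pages_sharing
    # opt a single guest out of page merging when sharing would cross a trust boundary
    qemu-system-x86_64 -machine pc,accel=kvm,mem-merge=off ...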
-----Original Message-----
From: Boeuf, Sebastien [mailto:sebastien.boeuf@intel.com]
Sent: Wednesday, February 21, 2018 4:31 PM
To: Larrew, Jesse <Jesse.Larrew@amd.com>; kata-dev@lists.katacontainers.io
Cc: Hollingsworth, Brent <brent.hollingsworth@amd.com>; Kaplan, David
<David.Kaplan@amd.com>; Woller, Thomas <thomas.woller@amd.com>
Subject: RE: Kata with AMD Secure Encrypted Virtualization (SEV)
Hi Jesse,
It is very exciting that you have been able to get Clear Containers working with this new AMD technology. I am sure this is something Kata Containers will need, since the goal of this project is to run virtualized, secure containers on any architecture able to do so. There are currently some discussions about using virtcontainers as the core API/library for the Kata Containers runtime, and as an example of how you could contribute to this, here is the link to the recent PR that was raised to bring basic support for the ARM architecture (through Qemu):
If it is confirmed that virtcontainers will become part of the Kata Containers runtime, then I would suggest raising a similar PR to add AMD support.

Don't forget to add some documentation about constraints and limitations for users who might want to use your AMD support, especially regarding the host kernel version. As long as you manage to get the Qemu patches merged upstream, everything will be fine, since the goal is to rely on a vanilla version of Qemu for Kata.
Thanks,
Sebastien
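For reference, the SEV support that was being upstreamed to QEMU at the time of this thread exposes memory encryption as an opt-in guest object, which fits the point above about relying on vanilla Qemu: the runtime only adds the extra options when the host advertises support. A rough sketch, assuming the option names from that patch series and illustrative EPYC values for the platform-dependent parameters (the rest of the command line is omitted):

    # check that the kvm_amd module was loaded with SEV enabled on the host
    cat /sys/module/kvm_amd/parameters/sev
    # launch a guest with encrypted memory
    qemu-system-x86_64 \
        -machine q35,accel=kvm,memory-encryption=sev0 \
        -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1 \
        ...

Whether a runtime should turn this on automatically whenever that host parameter reports SEV as enabled, or leave it as an explicit per-hypervisor choice, is the question Xu raises above.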