Kata with AMD Secure Encrypted Virtualization (SEV)
Hi all,

The virtualization instructions in the latest AMD EPYC server processors have been enhanced with a memory encryption feature that could provide projects like Kata Containers with unique security capabilities compared to their non-virtualized counterparts. We call this feature Secure Encrypted Virtualization (or SEV) and would be interested in collaborating with the Kata Container community to enable support for containers with encrypted memory.

In short, we've added an inline AES engine to our memory controller that encrypts data written to system DRAM and decrypts data read from DRAM. The encryption keys are generated from a TRNG in the onboard AMD Secure Processor (SP) and programmed into the memory controller as needed in a manner that is never visible to software. Additionally, our virtualization instructions have been enhanced to be able to associate a VM ASID with a unique encryption key, so each VM (or container) can keep the contents of its memory confidential from the host and/or other tenant VMs/containers. The guest kernel can choose which pages to encrypt and which to share with the host by setting a bit in the guest page tables, which puts the guest in complete control of the visibility of their data in the cloud. More information can be found in our Memory Encryption whitepaper [1] and in the Architecture Programmer's Manual [2].

Linux kernel support for SEV has been merged into the 4.15 and upcoming 4.16 kernels. OVMF BIOS support has been merged as well. The qemu changes are still being upstreamed, but the patches are available for testing on github [3].

With the above support in place, we have developed a proof-of-concept demo that is based on Clear Containers. Since the Clear Containers project had already done the heavy lifting to run container workloads inside of a VM, it was rather straightforward to add support to encrypt those VMs using SEV. The required changes are summarized below:

* Container kernel:
    - Add SEV support patches from the Linux kernel repo in [3].
    - Force virtio to use the DMA API (and hence SWIOTLB) when adding/removing buffers to/from the virtio ring buffer.
    - SEV requires a memory copy in order to perform the encryption, so zero-copy solutions using DAX for the container initial user space will not work.
        + Build in a small initramfs to use as the guest kernel initial user space.
        + Include the updated container agent binary and supporting libs (~14MB total).
* Container agent:
    - Update the agent not to use the pivot_root() method from the initramfs environment, and perform the pivot to the container workload filesystem manually instead.
* Container runtime:
    - Add the new qemu command line options for starting an SEV guest.
* Qemu-lite:
    - Add the SEV support patches from the qemu repo in [3].

With the above changes, we are able to start docker containers inside of SEV-protected VMs:

amd@pecanporter:~/src/git$ sudo docker run -ti --runtime sev-runtime busybox sh
/ # whoami
root
/ # dmesg | grep SEV
[    0.001000] AMD Secure Encrypted Virtualization (SEV) active
[    0.219196] SEV is active and system is using DMA bounce buffers

As a check, dumping the contents of a page from the qemu heap reveals plaintext data:

amd@pecanporter:~/src/git$ sudo dd if=/proc/$(pgrep qemu)/mem bs=4096 count=1 skip=23058854513 | xxd | tail
dd: /proc/38572/mem: cannot skip to specified offset
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 8.8437e-05 s, 46.3 MB/s
00000f60: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f70: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f80: 0000 0000 0000 0000 7100 0000 0000 0000  ........q.......
00000f90: 2f72 756e 2f76 6972 7463 6f6e 7461 696e  /run/virtcontain
00000fa0: 6572 732f 706f 6473 2f33 3565 3233 6565  ers/pods/35e23ee
00000fb0: 3330 6466 6237 3266 3135 3730 6265 3432  30dfb72f1570be42
00000fc0: 6665 3165 6331 3366 3331 3332 6138 6133  fe1ec13f3132a8a3
00000fd0: 6463 3336 6463 3131 6235 6365 3837 6236  dc36dc11b5ce87b6
00000fe0: 3437 3930 3736 6339 612f 636f 6e73 6f6c  479076c9a/consol
00000ff0: 652e 736f 636b 0000 0104 0000 0000 0000  e.sock..........

However, any attempt to read the container memory from the host produces only ciphertext:

amd@pecanporter:~/src/git$ sudo dd if=/proc/$(pgrep qemu)/mem bs=4096 count=1 skip=34165702144 | xxd | head
dd: /proc/38572/mem: cannot skip to specified offset
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 8.9039e-05 s, 46.0 MB/s
00000000: e9b8 e14d c063 ee18 fd85 5ecc 4d1f c1a2  ...M.c....^.M...
00000010: d681 cdf2 259b a97e c43b 5cde bf9e 695b  ....%..~.;\...i[
00000020: db3c 778b 8e77 89f4 f795 e5a6 9ebb 765b  .<w..w........v[
00000030: 0905 e1d3 c7ec 6f2b bada ed15 b2e0 db7f  ......o+........
00000040: d5e9 6d15 cf28 0ca1 4a45 3b9a 1779 e3ff  ..m..(..JE;..y..
00000050: 9ee0 b562 2311 6e5a e972 4c06 3f6a 6ebf  ...b#.nZ.rL.?jn.
00000060: 909a 88ea 737a 6226 5d87 8968 b31b d096  ....szb&]..h....
00000070: 9360 cbb0 4f34 d811 89a7 048f 01e8 d19e  .`..O4..........
00000080: 5429 995a 4de0 6fba 3360 8bb4 a2dc 17e4  T).ZM.o.3`......
00000090: 80f5 6657 9fd7 0347 e78d 4d13 6b6c c649  ..fW...G..M.kl.I

Our threat model is to allow container workloads to reduce their risk exposure to security vulnerabilities in the hosting environment, which seems to overlap nicely with the threat model of Kata Containers. Is this a feature that the Kata community would find useful? If so, we would be very interested to work with the community to enable SEV memory encryption for Kata Containers. Any and all feedback is welcome!

Thanks!

References:
[1] AMD Memory Encryption Whitepaper: http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf
[2] AMD64 Architecture Programmer's Manual Volume 2: System Programming: http://developer.amd.com/wordpress/media/2012/10/24593_APM_v21.pdf
[3] AMD SEV github repo: https://github.com/AMDESE/AMDSEV

Sincerely,

Jesse Larrew
MTS Software Security Architect
AMD Security Architecture R&D
jesse.larrew@amd.com
O: +(1) 512-602-0092 (x50092)
M: +(1) 512-791-4852
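For concreteness, the qemu patches in [3] expose SEV through a guest-policy object plus a machine property, so the new command line options mentioned above look roughly like the sketch below. The option names follow the patch set that is still under review and may change before it lands upstream, and the cbitpos/reduced-phys-bits values and OVMF path are illustrative placeholders that need to match the actual platform:

    qemu-system-x86_64 -enable-kvm \
        -machine q35,memory-encryption=sev0 \
        -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1 \
        -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
        -m 2048 -nographic

Inside a guest launched this way, the dmesg lines shown above ("SEV active", DMA bounce buffers) are the quickest confirmation that memory encryption is actually in effect.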
Hi Jesse,

It is very exciting that you have been able to get Clear Containers working with this new AMD technology. I am sure this is something Kata Containers will need, since the goal of this project is to run virtualized and secure containers on any architecture able to do so. There are currently some discussions about using virtcontainers as the core API/library for the Kata Containers runtime, and as an example of how you could contribute to this, here is the link to the recent PR that was raised to bring basic support for the ARM architecture (through Qemu): https://github.com/containers/virtcontainers/pull/614

If it gets confirmed that virtcontainers will become part of the Kata Containers runtime, I would suggest raising a similar PR to add the AMD support.

Don't forget to add some documentation about constraints and limitations for users who might want to use your AMD support, especially regarding the host kernel version. As long as you manage to get the Qemu patches merged upstream, everything will be fine, since the goal is to rely on a vanilla version of Qemu for Kata.

Thanks,
Sebastien
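On the documentation point Sebastien raises, a short host-side sanity check might look like the following; the interfaces assume the SEV-enabled host kernel and KVM module from the AMDSEV repo referenced above, so exact names and values may differ as the patches evolve:

    # KVM's AMD module should report SEV enabled
    cat /sys/module/kvm_amd/parameters/sev
    # the SEV firmware interface exposed by the ccp driver should be present
    ls -l /dev/sev

The guest side can then be confirmed from the guest's own dmesg, as in the demo above.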
Hi Sebastien,

Thanks for the pointer to the virtcontainers PR. I'll keep an eye on it as you suggested.

It's interesting that you plan to use vanilla qemu for Kata Containers. Do you intend to upstream the "nofw" and "static-prt" accelerators from qemu-lite, or will those optimizations be abandoned? They seem like a clever way to reduce guest boot time.

Sincerely,
Jesse
Hi Jesse,

As Sebastien said, memory encryption is an exciting feature for VMs.

Based on the discussion of the runtime merging proposal [1], kata-runtime 1.0 will be based on the runV hypervisor drivers [2] and vm factory [3]. We should consider whether this should be enabled on all qemu-based drivers whenever the CPU supports the feature, or implemented as a new hypervisor choice that lets users decide.

It looks like, by enabling memory encryption, we will get:
- stronger isolation, which we want
- a bigger memory footprint (vm clone and DAX won't work, and KSM won't work as well?)

I will read the references in detail after today's meeting. Will you attend the online meeting [4], Jesse?

[1] https://github.com/kata-containers/runtime/issues/33
[2] https://github.com/hyperhq/runv/tree/master/hypervisor
[3] https://github.com/hyperhq/runv/tree/master/factory
[4] https://etherpad.openstack.org/p/katacontainers-2018-architecture-committee-...

-Xu

--
Xu Wang
CTO & Cofounder, Hyper
github/twitter/wechat: @gnawux
http://hyper.sh
Hyper_: Make VM run like container
Hi Xu,

Thanks for the links!

KSM is a tricky subject. Memory sharing through KSM enables cross-VM side channels, so we have to be careful not to share memory across trust boundaries. VMs that share the same encryption key can take advantage of KSM among themselves, but not with other tenant VMs. Perhaps a compromise would be to use the same key for all VMs created on behalf of the same user, in order to get some KSM benefit without the risk of disclosing information to untrusted tenant VMs.

I’m not familiar with VM clone. Is there any documentation or source files I could browse to learn more?

On the topic of trade-offs, there is also a small (1-6%) memory access latency penalty due to the encryption. The worst case is 6% for latency-sensitive workloads (e.g. the SPECint mcf test), but the average latency overhead (measured across all SPECint tests) is ~1.4%. Pages that are marked as unencrypted in the guest page tables are unaffected.

I wasn’t able to join the call today, but I can join the next one if you like.

Sincerely,
Jesse
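As a side note on the KSM question: because each SEV guest's pages sit in DRAM as ciphertext under that guest's key, pages that are identical in plaintext do not look identical to the host, so KSM should find nothing to merge across guests using different keys. The standard KSM sysfs counters (nothing SEV-specific) make this easy to observe on a host:

    cat /sys/kernel/mm/ksm/run            # 1 if KSM is scanning
    cat /sys/kernel/mm/ksm/pages_shared   # KSM-deduplicated pages in use
    cat /sys/kernel/mm/ksm/pages_sharing  # how many mappings point at them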
Hi Jesse,

Thanks for the proposal! SEV fits the Kata Containers threat model well and can be quite useful in a cloud environment.

I have one question though -- how does it handle qemu vm clone?

Cheers,
Tao

--
bergwolf@hyper.sh
Hi Tao,

I’m not familiar with qemu VM clone. Is there any documentation or source files I could browse to learn more?

Sincerely,
Jesse
Hi Larrew,

Sorry, I meant to say live migration instead of clone, since qemu clone is mostly used to mean guest image cloning. Here is some background I found online: https://developers.redhat.com/blog/2015/03/24/live-migrating-qemu-kvm-virtua...

Cheers,
Tao

--
bergwolf@hyper.sh
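For background, plain (non-SEV) qemu live migration is driven roughly as follows; the hostname and port are placeholders, and the elided options must match on both ends:

    # destination host: start qemu with a matching configuration, waiting for state
    qemu-system-x86_64 ... -incoming tcp:0:4444

    # source host, in the qemu monitor:
    (qemu) migrate -d tcp:dest-host:4444
    (qemu) info migrate      # poll until the status reports completed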
Ah! Yes, live migration of encrypted guests is supported by the hardware. Qemu support is still being actively developed, though. The details of migrating encrypted VMs are discussed in our Secure Encrypted Virtualization API [1]. Basically, the AMD Secure Processor re-encrypts and integrity-protects the guest memory into discrete "packets" that can be sent to the destination machine. The transport/integrity keys used for migration are ephemeral keys negotiated with the receiving machine using a Diffie-Hellman exchange.

Live migration is useful for VMs that need to be stateful. However, my impression of container use cases is that they are encouraged to be stateless. For stateless containers, it would most likely be quicker to simply throw away the container and start a new one on the destination machine. Do you have specific use cases that require the ability to migrate containers?

[1] Secure Encrypted Virtualization API v0.16: https://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf

Sincerely,
Jesse
Yes. We have an optimization [1] based on the qemu live migration feature which lets us share the initial part of guest memory among guests on the same host. It's quite a useful feature for vm-based container workloads, because the kernel and initramfs are most likely the same for all guests on the host.

[1]: https://github.com/hyperhq/qemu/commit/162b05b38ddb8505c209cf3c570d70c76427c...

Cheers,
Tao
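To make the idea concrete: as I read the linked commit, the mechanism is to boot one "template" guest whose RAM is backed by a file, save its device state, and then start later guests from that saved state with a copy-on-write mapping of the same memory file; the commit appears to add a migration capability so the shared RAM is not copied into the state file. A rough sketch with placeholder paths and sizes:

    # template guest: file-backed RAM, then freeze it and save the device state
    qemu-system-x86_64 ... \
        -object memory-backend-file,id=mem0,size=2048M,mem-path=/run/template/memory,share=on \
        -numa node,memdev=mem0 -monitor stdio
    (qemu) stop
    (qemu) migrate "exec:cat > /run/template/state"

    # later guests: private (copy-on-write) mapping of the template RAM plus the saved state
    qemu-system-x86_64 ... \
        -object memory-backend-file,id=mem0,size=2048M,mem-path=/run/template/memory,share=off \
        -numa node,memdev=mem0 \
        -incoming "exec:cat /run/template/state"

This is presumably why Xu listed vm clone among the things SEV would break: each SEV guest's memory has to be encrypted with its own key at launch, so a pre-built plaintext memory image cannot simply be mapped into multiple guests.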
My two cents here:

It's also quite useful at a higher level when you are dealing with stateful services that are not heavily containerized in prod yet. The first ones that pop into my head are redis/memcached. There are also stateful big-data databases that require a lot of memory, say Cassandra or Scylla.
Why would a large-memory-footprint server like Redis benefit from sharing memory with other kernels running on the box? I would expect its memory requirements to dwarf the savings.

Thanks,
EJ
Sorry, I wasn't specific enough. I didn't mean sharing memory on the same physical box to improve performance by sharing bits. I meant live migration in general from one physical machine to another (aka VMotion), where you could have a containerized Redis (in a VM) and move it to another machine with minimal downtime. A containerized Redis wouldn't benefit from another containerized instance of Redis on the same machine unless they were replicas (which would defeat the purpose of having replicas; you generally want them in different datacenters).

Hope it helps.

Cheers,
Ricardo
Why would a large memory footprint server like Redis benefit from sharing memory with other kernels running on the box? I would expect it’s memory requirements to dwarf the savings.
Thanks, EJ
On Fri, Feb 23, 2018 at 8:32 AM Ricardo Aravena <raravena80@gmail.com> wrote:
On Fri, Feb 23, 2018 at 5:55 AM, Tao Peng <bergwolf@hyper.sh> wrote:
Ah! Yes, live migration of encrypted guests is supported by the hardware. Qemu support is still being actively developed though. The
On Fri, Feb 23, 2018 at 12:32 AM, Larrew, Jesse <Jesse.Larrew@amd.com> wrote: details of migrating encrypted VMs are discussed in our Secure Encrypted Virtualization API [1]. Basically, the AMD Secure Processor re-encrypts and integrity-protects the guest memory into discrete "packets" that can be sent to the destination machine. The transport/integrity keys used for migration are ephemeral keys negotiated with the receiving machine using a Diffie-Hellman exchange.
Live migration is useful for VMs that need to be stateful. However, my
impression of container use cases is that they are encouraged to be stateless. For stateless containers, it would most likely be quicker to simply throw away the container and start a new one on the destination machine. Do you have specific use cases that require the ability to migrate containers?
Yes. We have an optimization [1] based on the qemu live migration feature which let us share the initial part of guest memory among guests on the same host. It's a quite useful feature for vm-based container workload because the kernel and initramfs are most likely the same for all guests on the host.
My two cents here:
It's also quite useful at a higher level when you are dealing with stateful services, that are not heavily containerized in prod yet. First ones that pop in my head are redis/memchached. Then there's also big data stateful type of dbs that require a lot memory, say Cassandra, or Scylla.
[1]: https://github.com/hyperhq/qemu/commit/ 162b05b38ddb8505c209cf3c570d70c76427c8a5
Cheers, Tao
[1] Secure Encrypted Virtualization API v0.16: https://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf
Sincerely, Jesse
-----Original Message----- From: Tao Peng [mailto:bergwolf@hyper.sh] Sent: Thursday, February 22, 2018 5:15 AM To: Larrew, Jesse <Jesse.Larrew@amd.com> Cc: kata-dev@lists.katacontainers.io; Hollingsworth, Brent <brent.hollingsworth@amd.com>; Kaplan, David <David.Kaplan@amd.com>; Woller, Thomas <thomas.woller@amd.com> Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
Hi Larrew,
I’m not familiar with qemu VM clone. Is there any documentation or source files I could browse to learn more?
Sorry I meant to say live migration instead of clone since qemu clone is mostly used to mean guest image cloning. Here is some background I can find online: https://developers.redhat.com/blog/2015/03/24/live- migrating-qemu-kvm- virtual-machines/
Cheers, Tao -- bergwolf@hyper.sh
From: Ricardo Aravena [mailto:raravena80@gmail.com] Sent: Friday, February 23, 2018 10:32 AM To: Tao Peng <bergwolf@hyper.sh> Cc: Larrew, Jesse <Jesse.Larrew@amd.com>; Hollingsworth, Brent <brent.hollingsworth@amd.com>; Woller, Thomas <thomas.woller@amd.com>; Kaplan, David <David.Kaplan@amd.com>; kata-dev@lists.katacontainers.io Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
That's good information. Thanks! I'm definitely trying to understand which workloads would benefit most from the container-in-a-VM approach, and which features/capabilities are most important. Are there any other features/tricks that an SEV implementation would need to preserve? Sincerely, Jesse
On Fri, Feb 23, 2018 at 1:09 PM, Larrew, Jesse <Jesse.Larrew@amd.com> wrote:
You're welcome! How about live migration recovery for disaster recovery (DR) type scenarios? If for some reason somebody or something pulls the plug on either the source server or the destination server in the middle of it all, it would be nice if the data didn't get corrupted. There's a data protection section in the API spec; I'm not sure if that covers this case. Also, from the sound of it, it looks like the platform key would be different for each server. Overall SEV looks great IMO. Cheers, Ricardo
From: Tao Peng [mailto:bergwolf@hyper.sh] Sent: Friday, February 23, 2018 7:56 AM To: Larrew, Jesse <Jesse.Larrew@amd.com> Cc: kata-dev@lists.katacontainers.io; Hollingsworth, Brent <brent.hollingsworth@amd.com>; Kaplan, David <David.Kaplan@amd.com>; Woller, Thomas <thomas.woller@amd.com> Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
Yes. We have an optimization [1] based on the qemu live migration feature which let us share the initial part of guest memory among guests on the same host. It's a quite useful feature for vm-based container workload because the kernel and initramfs are most likely the same for all guests on the host.
[1]: https://github.com/hyperhq/qemu/commit/162b05b38ddb8505c209cf3c570d70c76427c8a5
Ah, I understand now. That’s a neat idea! SEV forces all code pages to be encrypted, so this technique wouldn’t work without modification. Since the guest kernel knows where all of its pages are and has access to a network stack, the kernel could theoretically establish a connection to a dummy guest and migrate/clone *itself* into the dummy.
Cheers, Tao
Sincerely, Jesse
Hi Jesse, Thanks for the detailed explanation, glad to see you got that working with Clear Containers. Per-container/VM memory encryption is an exciting feature that we'll have to support as, as you said, it fits really well into the Kata Containers goals and architecture. A few comments/questions: On Wed, Feb 21, 2018 at 10:06:25PM +0000, Larrew, Jesse wrote:
Linux kernel support for SEV has been merged into the 4.15 and upcoming 4.16 kernels. OVMF BIOS support has been merged as well. The qemu changes are still being upstreamed, but the patches are available for testing on github [3].
With the above support in place, we have developed a proof-of-concept demo that is based on Clear Containers. Since the Clear Containers project had already done the heavy lifting to run container workloads inside of a VM, it was rather straightforward to add support to encrypt those VMs using SEV. The required changes are summarized below: * Container kernel: - Add SEV support patches from the Linux kernel repo in [3]. - Force virtio to use the DMA API (and hence SWIOTLB) when adding/removing buffers to/from the virtio ring buffer.
Ah, now I understand where the iommu question on the cc-devel mailing list was coming from :) Would you mind explaining why you need SWIOTLB when SEV is enabled? Also, I assume you need to force all virtio devices to do DMA, right? Did you just hack vring_use_dma_api() at the moment?
- SEV requires a memory copy in order to perform the encryption, so zero-copy solutions using DAX for the container initial user space will not work. + Build in a small initramfs to use as the guest kernel initial user space. + Include the updated container agent binary and supporting libs (~14MB total).
So not using nvdimm from the QEMU command line and switching to a virtio block would have been sufficient here, right?
As a check, dumping the contents of a page from the qemu heap reveals plaintext data:
amd@pecanporter:~/src/git$ sudo dd if=/proc/$(pgrep qemu)/mem bs=4096 count=1 skip=23058854513 | xxd | tail
dd: /proc/38572/mem: cannot skip to specified offset
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 8.8437e-05 s, 46.3 MB/s
00000f60: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f70: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f80: 0000 0000 0000 0000 7100 0000 0000 0000  ........q.......
00000f90: 2f72 756e 2f76 6972 7463 6f6e 7461 696e  /run/virtcontain
00000fa0: 6572 732f 706f 6473 2f33 3565 3233 6565  ers/pods/35e23ee
00000fb0: 3330 6466 6237 3266 3135 3730 6265 3432  30dfb72f1570be42
00000fc0: 6665 3165 6331 3366 3331 3332 6138 6133  fe1ec13f3132a8a3
00000fd0: 6463 3336 6463 3131 6235 6365 3837 6236  dc36dc11b5ce87b6
00000fe0: 3437 3930 3736 6339 612f 636f 6e73 6f6c  479076c9a/consol
00000ff0: 652e 736f 636b 0000 0104 0000 0000 0000  e.sock..........
However, any attempt to read the container memory from the host produces only ciphertext:
amd@pecanporter:~/src/git$ sudo dd if=/proc/$(pgrep qemu)/mem bs=4096 count=1 skip=34165702144 | xxd | head
dd: /proc/38572/mem: cannot skip to specified offset
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 8.9039e-05 s, 46.0 MB/s
00000000: e9b8 e14d c063 ee18 fd85 5ecc 4d1f c1a2  ...M.c....^.M...
00000010: d681 cdf2 259b a97e c43b 5cde bf9e 695b  ....%..~.;\...i[
00000020: db3c 778b 8e77 89f4 f795 e5a6 9ebb 765b  .<w..w........v[
00000030: 0905 e1d3 c7ec 6f2b bada ed15 b2e0 db7f  ......o+........
00000040: d5e9 6d15 cf28 0ca1 4a45 3b9a 1779 e3ff  ..m..(..JE;..y..
00000050: 9ee0 b562 2311 6e5a e972 4c06 3f6a 6ebf  ...b#.nZ.rL.?jn.
00000060: 909a 88ea 737a 6226 5d87 8968 b31b d096  ....szb&]..h....
00000070: 9360 cbb0 4f34 d811 89a7 048f 01e8 d19e  .`..O4..........
00000080: 5429 995a 4de0 6fba 3360 8bb4 a2dc 17e4  T).ZM.o.3`......
00000090: 80f5 6657 9fd7 0347 e78d 4d13 6b6c c649  ..fW...G..M.kl.I
Sweet!
Our threat model is to allow container workloads to reduce their risk exposure to security vulnerabilities in the hosting environment, which seems to overlap nicely with the threat model of Kata Containers. Is this a feature that the Kata community would find useful? If so, we would be very interested to work with the community to enable SEV memory encryption for Kata Containers. Any and all feedback is welcome!
So I guess we'll gather the kernel, qemu and firmware patches through upstream at some point. Or we can backport them once they're in if we don't want to move to the latest versions for those. I guess Kata Containers' main task to support this would be at the hypervisor level, specifically being able to pass the right options to QEMU. In my mind we should make our qemu hypervisor implementation detect SEV/MK-TME support dynamically and set the right qemu options (+ memory-encryption, - nvdimm) by default when the host CPU supports it. I believe we should also provide an opt-out runtime option for those who don't want to pay the performance penalty of memory encryption. Cheers, Samuel.
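As a rough illustration of the dynamic detection Samuel suggests (a sketch, assuming an SEV-capable host kernel with the kvm_amd module loaded; exact flag and parameter names may vary by kernel version), the runtime could probe the host before choosing the qemu options:

# Does the CPU advertise SEV? (flag as exposed by SEV-aware kernels)
grep -w sev /proc/cpuinfo

# Is SEV enabled in KVM? (kvm_amd must be loaded with sev=1)
cat /sys/module/kvm_amd/parameters/sev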
Ah, now I understand where the iommu question on the cc-devel mailing list was coming from :) Would you mind explaining why you need SWIOTLB when SEV is enabled? Also, I assume you need to force all virtio devices to do DMA, right? Did you just hack vring_use_dma_api() at the moment?
Ha ha! Yup, now you have the whole story. :) I never got around to thanking you for your reply in that thread. It was really helpful. Thanks!

On EPYC, our IOMMU doesn't yet support SEV, so DMA to/from devices needs to be done using unencrypted pages. It was easy to implement this using the bounce buffers provided by SWIOTLB. As you guessed, a quick change to vring_use_dma_api() got virtio support working properly:

amd@pecanporter:~/src/git/AMDSEV/src/kvm$ git diff
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index eb30f3e09a47..1bba0a6c1668 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -25,6 +25,7 @@
 #include <linux/hrtimer.h>
 #include <linux/kmemleak.h>
 #include <linux/dma-mapping.h>
+#include <linux/mem_encrypt.h>
 #include <xen/xen.h>

 #ifdef DEBUG
@@ -147,6 +148,9 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
        if (!virtio_has_iommu_quirk(vdev))
                return true;

+       if (mem_encrypt_active())
+               return true;
+
        /* Otherwise, we are left to guess.  */
        /*
         * In theory, it's possible to have a buggy QEMU-supposed
- SEV requires a memory copy in order to perform the encryption, so zero-copy solutions using DAX for the container initial user space will not work.
  + Build in a small initramfs to use as the guest kernel initial user space.
  + Include the updated container agent binary and supporting libs (~14MB total).

So not using nvdimm from the QEMU command line and switching to a virtio block would have been sufficient here, right?
Yes, virtio-blk should work for this as well.
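For illustration, a rough sketch of what that swap looks like on the qemu command line (the image path and IDs below are placeholders, not the actual Clear Containers options): instead of the memory-backend-file/nvdimm pair used for the DAX-mapped rootfs, the rootfs image is attached as a plain block device:

# Attach the container rootfs image as a virtio-blk device instead of an nvdimm
-drive file=/path/to/container-rootfs.img,if=none,id=rootdisk,format=raw \
-device virtio-blk-pci,drive=rootdisk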
Sweet!
Thanks! I get a kick out of this too. :D
So I guess we'll gather the kernel, qemu and firmware patches through upstream at some point. Or we can backport them once they're in if we don't want to move to the latest versions for those. I guess Kata Containers main task to support this would be at the hypervisor level, specifically at being able to pass the right options to QEMU. In my mind we should make our qemu hypervisor implementation detect SEV/MK-TME support dynamically and set the right qemu options (+ memory-encryption, - nvdimm) by default when the host CPU supports it. I believe we should also provide an opt-out runtime option for those who don't want to pay the performance penalty of memory encryption.
Yes, the largest changes were to teach virtcontainers/govmm how to enable memory encryption in qemu. The Linux kernel currently has a boot parameter to disable memory encryption support (mem_encrypt=off), which could be added to the guest kernel "append" option in the config file, but that won't prevent qemu from creating the (unused) memory encryption machine objects. I agree that a proper "chicken bit" option in the config file would be appropriate.
Cheers, Samuel.
Sincerely, Jesse
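To make those knobs concrete, here is a rough sketch of the two sides (based on the qemu SEV patches referenced earlier in the thread; the cbitpos/reduced-phys-bits values below are example values for first-generation EPYC, not universal constants, and the option names are the ones proposed for upstream qemu):

# Host side: create an SEV guest object and point the machine at it
qemu-system-x86_64 -enable-kvm \
    -machine q35,memory-encryption=sev0 \
    -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1 \
    [remaining guest options]

# Guest side: the opt-out "chicken bit" Jesse mentions, passed on the kernel append line
-append "... mem_encrypt=off"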
Jesse,
On EPYC, our IOMMU doesn't yet support SEV, so DMA to/from devices needs to be done using unencrypted pages. It was easy to implement this using the bounce buffers provided by SWIOTLB. As you guessed, a quick change to vring_use_dma_api() got virtio support working properly:
Clear Containers today supports direct device assignment via SRIOV. This requires pre-allocation and pinning of VM memory. Will this continue to work? Also, we have been working on reverse ballooning, i.e. freeing unused memory from the VM back to the host. Is there a way to get this to work with encrypted memory? For more details about the patches, see https://gist.github.com/sboeuf/fc71f0218a81997251ee0d7668df2bd9 -manohar
From: Castelino, Manohar R [mailto:manohar.r.castelino@intel.com] Sent: Friday, February 23, 2018 3:53 PM To: Larrew, Jesse <Jesse.Larrew@amd.com>; Samuel Ortiz <sameo@linux.intel.com> Cc: Hollingsworth, Brent <brent.hollingsworth@amd.com>; Woller, Thomas <thomas.woller@amd.com>; Kaplan, David <David.Kaplan@amd.com>; kata-dev@lists.katacontainers.io Subject: RE: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
Hi Manohar, SEV also requires the guest memory to be pre-allocated and pinned [1], so that's not a problem. As long as the PF drivers in the guest are using the DMA APIs, everything should continue to work. Similarly, the reverse ballooning patches should also work with SEV. In fact, I would argue that SEV complements this feature by ensuring that physical page contents aren't exposed to the host when the guest uses MADV_FREE. I've CC'ed our KVM expert, Brijesh Singh, just in case he sees something that I missed. Sincerely, Jesse [1] In order to ensure that memory blocks with identical data will encrypt to different ciphertext, SEV mixes the physical address into the encryption algorithm. As a result, if a page of memory is moved to a different physical address, it will not decrypt properly. This also defeats block-move attacks on the guest memory, but it also requires all guest memory to be pinned.
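One operational note, added here as an assumption rather than something stated in the thread: because the encrypted guest's pages end up pinned, they are typically charged against the locked-memory limit of the account running qemu, so a deployment may need to raise RLIMIT_MEMLOCK for that user. A minimal sketch (the user name and values are placeholders):

# Check the current locked-memory limit for the user that launches qemu
ulimit -l

# Example /etc/security/limits.conf entries to raise it persistently
qemu-user  soft  memlock  unlimited
qemu-user  hard  memlock  unlimited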
On Fri, Feb 23, 2018 at 4:42 PM Larrew, Jesse <Jesse.Larrew@amd.com> wrote:
SEV also requires the guest memory to be pre-allocated and pinned [1], so that's not a problem. As long as the PF drivers in the guest are using the dma apis, everything should continue to work.
That's surprising -- I can see where reclaim would be challenging without something like a balloon, but why must the memory be initially backed? What happens if you leave a page unbacked and attempt to lazily back it on an EPT fault?
From: Jon Olson [mailto:jonolson@google.com] Sent: Friday, February 23, 2018 6:58 PM To: Larrew, Jesse <Jesse.Larrew@amd.com> Cc: Castelino, Manohar R <manohar.r.castelino@intel.com>; Samuel Ortiz <sameo@linux.intel.com>; Hollingsworth, Brent <brent.hollingsworth@amd.com>; kata-dev@lists.katacontainers.io; Singh, Brijesh <brijesh.singh@amd.com>; Kaplan, David <David.Kaplan@amd.com>; Woller, Thomas <thomas.woller@amd.com> Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
What happens if you leave a page unbacked and attempt to lazily back it on an EPT fault? [JDL] You're right, Jon. I misspoke above. The memory only needs to be pinned; the backing pages can be faulted in on demand. Sorry for the confusion.
Jesse, I wanted to follow up here… I think this is a pretty exciting feature and I wanted to see what next steps are. Is this something that you're planning to or can start contributing to the Kata project? Thanks, Eric
Hi Eric, I'm seeking internal approval to contribute. Do you have a deadline for a decision? Sincerely, Jesse
There isn't really a deadline. While we are still discussing release cadence for Kata, this seems like a nice feature to get in, perhaps after our initial 1.0 release (targeting ~June 1). Eric
Hi Eric, I got SEV working with the latest 0.0.1 kata runtime. Is there still a chance of getting this in before the 1.0 release on the 22nd? Or are we looking at 1.1.0 at this point? Sincerely, Jesse
Sweet, thanks Jesse. Send a PR and we can start the review process and see (my apologies if this was already done; I have too many repos to look at and mail filters to keep the noise down). We are acting on time-based releases, not feature-based, so as long as it passes the review process and doesn't cause issues with our CI, I don't think there'd be an issue getting it in. -Eric
Hey Jesse, I had the chance to meet Brent @ Red Hat Summit yesterday, and this was a good reminder to reach out regarding SEV support in Kata. First, as expressed to Brent, this is a sweet use case. Second, one of the things that has come to mind for me is how we can verify this. Our CI is running on a mix of bare-metal machines for metrics and machines in the cloud (via Azure) for functional testing. For other architectures, and including this feature, it'd be best to have this exercised in CI. Basically, I'd want to make sure we can replicate the CI on a single AMD EPYC (or other SEV-enabled system), which could help gate our CI process. We have the test setup designed to be easily reproduced, and we can work together on getting this set up, assuming we find a machine this can run on. With this feature enabled in our CI, we'd be able to guarantee that it continues to work (and if it fails, it should be an easy fix). Thanks, Eric
Thanks Eric From: "Larrew, Jesse" <Jesse.Larrew@amd.com<mailto:Jesse.Larrew@amd.com>> Date: Friday, February 23, 2018 at 7:11 PM To: Jon Olson <jonolson@google.com<mailto:jonolson@google.com>> Cc: "Singh, Brijesh" <brijesh.singh@amd.com<mailto:brijesh.singh@amd.com>>, "Kaplan, David" <David.Kaplan@amd.com<mailto:David.Kaplan@amd.com>>, "Hollingsworth, Brent" <brent.hollingsworth@amd.com<mailto:brent.hollingsworth@amd.com>>, "kata-dev@lists.katacontainers.io<mailto:kata-dev@lists.katacontainers.io>" <kata-dev@lists.katacontainers.io<mailto:kata-dev@lists.katacontainers.io>>, "Woller, Thomas" <thomas.woller@amd.com<mailto:thomas.woller@amd.com>> Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV) From: Jon Olson [mailto:jonolson@google.com] Sent: Friday, February 23, 2018 6:58 PM To: Larrew, Jesse <Jesse.Larrew@amd.com<mailto:Jesse.Larrew@amd.com>> Cc: Castelino, Manohar R <manohar.r.castelino@intel.com<mailto:manohar.r.castelino@intel.com>>; Samuel Ortiz <sameo@linux.intel.com<mailto:sameo@linux.intel.com>>; Hollingsworth, Brent <brent.hollingsworth@amd.com<mailto:brent.hollingsworth@amd.com>>; kata-dev@lists.katacontainers.io<mailto:kata-dev@lists.katacontainers.io>; Singh, Brijesh <brijesh.singh@amd.com<mailto:brijesh.singh@amd.com>>; Kaplan, David <David.Kaplan@amd.com<mailto:David.Kaplan@amd.com>>; Woller, Thomas <thomas.woller@amd.com<mailto:thomas.woller@amd.com>> Subject: Re: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV) On Fri, Feb 23, 2018 at 4:42 PM Larrew, Jesse <Jesse.Larrew@amd.com<mailto:Jesse.Larrew@amd.com>> wrote:
From: Castelino, Manohar R [mailto:manohar.r.castelino@intel.com<mailto:manohar.r.castelino@intel.com>] Sent: Friday, February 23, 2018 3:53 PM To: Larrew, Jesse <Jesse.Larrew@amd.com<mailto:Jesse.Larrew@amd.com>>; Samuel Ortiz <sameo@linux.intel.com<mailto:sameo@linux.intel.com>> Cc: Hollingsworth, Brent <brent.hollingsworth@amd.com<mailto:brent.hollingsworth@amd.com>>; Woller, Thomas <thomas.woller@amd.com<mailto:thomas.woller@amd.com>>; Kaplan, David <David.Kaplan@amd.com<mailto:David.Kaplan@amd.com>>; kata- dev@lists.katacontainers.io<mailto:dev@lists.katacontainers.io> Subject: RE: [kata-dev] Kata with AMD Secure Encrypted Virtualization (SEV)
Jesse,
On EPYC, our IOMMU doesn't yet support SEV, so DMA to/from devices needs to be done using unencrypted pages. It was easy to implement this using the bounce buffers provided by SWIOTLB. As you guessed, a quick change to vring_use_dma_api() got virtio support working properly.
Clear Containers today supports direct device assignment via SRIOV. This requires pre-allocation and pinning of VM memory. Will this continue to work?
Also, we have been working on reverse ballooning, i.e. freeing unused memory from the VM back to the host. Is there a way to get this to work with encrypted memory?
For more details about the patches, see https://gist.github.com/sboeuf/fc71f0218a81997251ee0d7668df2bd9
-manohar
Hi Eric,

We're still working on setting up the CI server, but we've decided to make the SEV patches available on our github repo so folks can kick the tires, so to speak.

The SEV patches require a new dependency in the vendor tree and update a few others:

1. intel-go/cpuid (new dependency)
2. intel/govmm
3. virtcontainers

To submit a PR to kata-runtime, should I first fork each of the above projects and submit PRs against them, then point to those PRs when I submit to kata? This is the first Go project that I've contributed to, so I'm not sure what the protocol is. Any tips would be helpful. Thanks!

Sincerely, Jesse
Hi Jesse,

The normal workflow for submitting PRs to the kata repos is pretty much the standard github way – you fork the repo, make yourself a branch in your fork, do the work there, and push to your fork on github. The github website will then show you a 'make pull request' dialog box on both (iirc) your fork and the main repo home page, and you submit via that (well, that is what I do ;-).

*Apart* from…. the runtime repo. There is a golang path dependency in the virtcontainers subdir, which means you cannot build it inside your own fork ☹. So the workflow there is (and others please correct me if you have a better way):

- Clone the main repo
- Make a branch in the main repo (on your local machine)
- Work within that branch
- Fork the main repo
- Add your fork as a remote to your main repo clone
- Push your working branch to *your* forked repo (and definitely not back to the main repo ;-)
- And then follow the PR submit process above.

I ran into this yesterday ;-), and thought we had documented it – but I could not find it in either the kata or CC repos. If somebody knows whether we do have that documented, shout! Or I'll see if I can add it (but, tbh, it won't be this week, and then I think it will fall off my radar! I guess I'll go open an Issue right now at least…)

Graham
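A rough shell sketch of the runtime-repo workflow Graham describes above; the remote name "myfork", the branch name "sev-support", and the user placeholder are illustrative, not taken from the thread:

    # Branch in a clone of the main repo, but push that branch to your own fork.
    cd runtime                          # a clone of github.com/kata-containers/runtime
    git checkout -b sev-support         # do the work on this branch and commit it
    git remote add myfork https://github.com/<your-github-user>/runtime.git
    git push myfork sev-support         # push to *your* fork, then open the PR from it

The pull request is then raised from the fork's branch against the main repo, as in the standard workflow described at the top of Graham's message.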
This issue only occurs if you do *not* build the runtime in $GOPATH/src/github.com/kata-containers/runtime. Hence, build there if you can :)

See: https://github.com/kata-containers/runtime/issues/430

Cheers,
James
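A minimal sketch of James's tip, assuming a conventional GOPATH layout and the usual make-based build (paths are illustrative):

    # Keep the clone at its canonical Go import path so the virtcontainers
    # packages resolve correctly when building.
    mkdir -p "$GOPATH/src/github.com/kata-containers"
    cd "$GOPATH/src/github.com/kata-containers"
    git clone https://github.com/kata-containers/runtime.git
    cd runtime && make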
Hi Jesse,

2018-06-19 18:57 GMT+01:00 Larrew, Jesse <Jesse.Larrew@amd.com>:
Hi Eric,
We’re still working on setting up the CI server, but we’ve decided to make the SEV patches available on our github repo so folks can kick the tires, so to speak.
\o/ - great!
The SEV patches require a new dependency in the vendor tree and update a few others:
1. intel-go/cpuid (new dependency), 2. intel/govmm, 3. virtcontainers
To submit a PR to kata-runtime, should I first fork each of the above projects and submit PRs against them, then point to those PRs when I submit to kata? This is the first Go project that I’ve contributed to, so I’m not sure what the protocol is. Any tips would be helpful. Thanks!
For (1) and (2), yes, you'll need to click "fork" on https://github.com/intel-go/cpuid and https://github.com/intel/govmm, and then raise PRs on those two projects.

Once both those PRs have landed, since virtcontainers is now part of the Kata runtime, you can then raise a PR on https://github.com/kata-containers/runtime for the changes you need to make to https://github.com/kata-containers/runtime/tree/master/virtcontainers. As part of that change, you'll need to update the runtime vendoring to pull in (1) and (2):

- Add "github.com/intel-go/cpuid" to Gopkg.toml to pull in your upstream changes. See https://github.com/kata-containers/community/blob/master/VENDORING.md#vendor...
- Update the commit for "github.com/intel/govmm" in Gopkg.toml to pull in your upstream changes. See https://github.com/kata-containers/community/blob/master/VENDORING.md#update...

Note that you can *probably* get away with updating the Gopkg.toml file once for both changes (meaning you only need to run "dep ensure" once) and having all the changes on a single commit.

HTH.

Cheers,
James
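A minimal shell sketch of the vendoring step James outlines, assuming the dep-based workflow referenced in VENDORING.md; the Gopkg.toml stanza and the revision placeholder are illustrative:

    # In Gopkg.toml: add a [[constraint]] for the new dependency and bump the
    # commit used for the updated one, for example:
    #
    #   [[constraint]]
    #     name = "github.com/intel-go/cpuid"
    #     revision = "<commit-containing-your-changes>"
    #
    # Then re-vendor and commit the result together with the virtcontainers changes:
    dep ensure
    git add Gopkg.toml Gopkg.lock vendor/
    git commit -s                       # Kata changes carry a Signed-off-by line

As James notes, both Gopkg.toml edits can usually land in a single commit, so "dep ensure" only needs to run once.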
Hi James,

That's very helpful. Thanks!

Sincerely, Jesse
participants (12)
- Boeuf, Sebastien
- Castelino, Manohar R
- EJ Campbell
- Ernst, Eric
- Hunt, James O
- Jon Olson
- Larrew, Jesse
- Ricardo Aravena
- Samuel Ortiz
- Tao Peng
- Whaley, Graham
- Xu Wang