[kata-dev] /dev/urandom or /dev/random

Jose Carlos Venegas Munoz jose.carlos.venegas.munoz at intel.com
Tue Sep 25 00:57:41 UTC 2018


Manohar, the agent does not have any call to generate a random number
(we removed the only call it had a long time ago, before adding
virtio-rng, because of a similar issue to the one you describe with the
kernel + Go).

See:
https://github.com/kata-containers/agent/pull/279

In the past I've seen that what is getting blocked is the kernel at
early boot (probably getting a random number with virtio-rng). We were
able to reproduce something similar on vexxhost VMs. After adding a
daemon to generate entropy on vexxhost this was fixed.
See:
https://github.com/kata-containers/runtime/pull/676#issuecomment-418812957

I could not reproduce this locally, but it is probably the same
issue. The dmesg logs from the VM could give us more information.

Second, the VMs are pulling a lot of entropy at startup.
Given that the agent is not trying to get random numbers, I am not sure
whether it is the kernel or systemd that is consuming the entropy on
the host.
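
If it helps to confirm that, a quick way is to watch the host entropy
pool while a container starts. A minimal sketch (it just polls
/proc/sys/kernel/random/entropy_avail, nothing Kata-specific):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strings"
	"time"
)

// Poll the host's available entropy once per second, so we can see how
// much a starting VM/container actually drains from the pool.
func main() {
	for {
		data, err := ioutil.ReadFile("/proc/sys/kernel/random/entropy_avail")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read entropy_avail:", err)
			os.Exit(1)
		}
		fmt.Printf("%s entropy_avail=%s\n",
			time.Now().Format(time.RFC3339),
			strings.TrimSpace(string(data)))
		time.Sleep(time.Second)
	}
}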

For now we are adding a configuration option to decide which host
entropy source to use. Also, from an administrator's perspective,
allowing a limit on the amount of entropy consumed is a good option.
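
To give an idea of what that could look like (the names and plumbing
below are only illustrative, not the final implementation), the runtime
mainly needs to point virtio-rng at the configured source and,
optionally, rate-limit it when building the QEMU command line:

package main

import "fmt"

// Illustrative sketch: build the QEMU virtio-rng arguments from a
// configurable host entropy source plus an optional rate limit
// (bytes allowed per period, period given in milliseconds).
func virtioRNGArgs(entropySource string, maxBytes, periodMS uint) []string {
	args := []string{
		"-object", fmt.Sprintf("rng-random,id=rng0,filename=%s", entropySource),
	}
	device := "virtio-rng-pci,rng=rng0"
	if maxBytes > 0 {
		device = fmt.Sprintf("%s,max-bytes=%d,period=%d", device, maxBytes, periodMS)
	}
	return append(args, "-device", device)
}

func main() {
	// e.g. read from /dev/urandom and cap the guest at 1024 bytes per second
	fmt.Println(virtioRNGArgs("/dev/urandom", 1024, 1000))
}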

-
Carlos

On Mon, Sep 24, 2018 at 11:59:51PM +0000, Castelino, Manohar R wrote:
> We had seen something similar to this on another project when we switched from
> Go 1.8 to 1.9.
> 
> On Linux, Go now calls the getrandom system call without the GRND_NONBLOCK
> flag; it will now block until the kernel has sufficient randomness. On kernels
> predating the getrandom system call, Go continues to read from /dev/urandom.
> 
> 
> We had to implement something along the lines of
> https://github.com/ciao-project/ciao/commit/30ddabb9e201a7985100750e64172ae4b518d1e6
> to work around this issue.
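> 
> Roughly, the workaround boils down to reading /dev/urandom directly instead
> of going through crypto/rand, so nothing ever blocks on getrandom(2). A
> trimmed-down sketch (not the exact ciao code):
> 
> package main
> 
> import (
> 	"fmt"
> 	"io"
> 	"os"
> )
> 
> // Read n bytes straight from /dev/urandom instead of crypto/rand, so we
> // never block waiting for the kernel to consider its pool initialized.
> func urandomBytes(n int) ([]byte, error) {
> 	f, err := os.Open("/dev/urandom")
> 	if err != nil {
> 		return nil, err
> 	}
> 	defer f.Close()
> 
> 	buf := make([]byte, n)
> 	if _, err := io.ReadFull(f, buf); err != nil {
> 		return nil, err
> 	}
> 	return buf, nil
> }
> 
> func main() {
> 	b, err := urandomBytes(16)
> 	if err != nil {
> 		panic(err)
> 	}
> 	fmt.Printf("%x\n", b)
> }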
> 
> Is something like this happening within the VM, in the agent, which is written
> in Go?
> 
> 
> 
> On Mon, 2018-09-24 at 16:40 -0700, Jon Olson via kata-dev wrote:
> 
>     +tytso at mit.edu -- Ted, I know you had some thoughts on seeding virtio-rng
>     from /dev/urandom (not sure the listserv will let you post, but it should
>     catch at least Sebastien and me).
> 
>     Jon
> 
> 
>     On Mon, Sep 24, 2018 at 3:21 PM Boeuf, Sebastien
>     <sebastien.boeuf at intel.com> wrote:
> 
>         Hi folks,
> 
>         Following the discussion from this morning during the Arch committee
>         meeting, I have investigated the sporadic issue
>         https://github.com/kata-containers/runtime/issues/702 that prevents
>         some Kata containers from starting.
> 
>         I have been able to reproduce it pretty easily, and I have identified
>         that it is related to the entropy of the host being almost entirely
>         consumed by the first containers, leaving no time for the host to
>         regenerate new entropy for the next containers.
> 
>         Currently, the virtio-rng device exposed by QEMU relies on /dev/random
>         on the host. Because this device blocks any access to it until more
>         entropy is available, we end up getting the timeout from the gRPC
>         client: the agent is not ready, so the gRPC server is not running yet
>         (the guest is blocked waiting for new entropy from /dev/random).
>         One way to work around this issue is to tweak the parameters of the
>         virtio-rng device, such as max-bytes=10, limiting the amount of entropy
>         that can be consumed by the guest each period. This means that starting
>         one container will not consume all of the host's entropy, but if we run
>         a lot of containers we are still very likely to hit the same issue
>         eventually.
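> 
>         For reference, that tweak is just two extra properties on the
>         virtio-rng device in the QEMU command line, something along the
>         lines of:
> 
>             -object rng-random,id=rng0,filename=/dev/random \
>             -device virtio-rng-pci,rng=rng0,max-bytes=10,period=1000
> 
>         (max-bytes is the budget per period, with the period given in
>         milliseconds.)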
> 
>         The long-term solution seems to be to rely on the /dev/urandom device,
>         as it will not block when no fresh entropy is available. But from what
>         we can read online, it seems that some people have security concerns
>         about it. I'd like to understand whether those worries are valid, and
>         whether we should keep thinking about another way to fix this issue.
> 
>         Thanks,
>         Sebastien


