vsock & network namespaces in Kata
Hi folks,
I'm working on vsock on Linux and I'm adding network namespace support to it [1]. The main changes are the following:

- Guest: can talk with the host (send and receive packets) in the default init_netns (maybe we can make it configurable)
- Host: can talk with the guest only in the same netns where the VMM is running (e.g. if we start QEMU from ns1, all packets are received only in ns1)
- Host: can assign the same CID to VMs running in different network namespaces
- Nested VMs (available from Linux 5.5): isolate host applications from guest applications

IIUC you have a runtime running on the host that communicates using vsock with an agent in the guest. I have a few questions:

1. Is the VMM (e.g. QEMU) running in a network namespace?
2. Is the host application that uses vsock to communicate with the guest running in the same network namespace?
3. Is the agent in the guest running outside a netns (default init_netns)?

If you have any other suggestions, I'd be happy to hear them.

Thanks,
Stefano
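To make the host side concrete, here is a minimal sketch of a host application talking to a guest over vsock; the guest CID 3 and port 1024 are example values only, not anything Kata-specific. Under the proposed change, this connect() would only succeed if the process runs in the same netns as the VMM:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        /* Stream socket in the AF_VSOCK address family */
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_vm addr;
        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = 3;       /* example guest CID assigned by the VMM */
        addr.svm_port = 1024;   /* example port the guest agent listens on */

        /* With the proposed netns support, this only succeeds from the
         * netns where the VMM was started. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        write(fd, "hello\n", 6);
        close(fd);
        return 0;
    }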
Hi Stefano
- Host: assign the same CID of VMs running in different network namespaces
this means two VMs running in different namespace can use the same CID? currently we use VHOST_VSOCK_SET_GUEST_CID to get a unique context ID, is this going to change?
1. Is the VMM (e.g. QEMU) running in a network namespace?
yes, see https://github.com/kata-containers/runtime/blob/62cd08044d78912228d9dc800cb1...
2. Is the host application, that use vsock to communicate with the guest, running in the same network namespace?
afaik, no
3. Is the agent in the guest running outside a netns (default init_netns)?
yes
The architecture doc covers some of your questions I believe [1]:
https://github.com/kata-containers/documentation/blob/master/design/architec...

Cheers,
James

[1] - although not all - we need to refresh that doc soon...
On Wed, Dec 04, 2019 at 04:20:12PM +0000, Hunt, James O wrote:
The architecture doc covers some of your questions I believe [1]:
https://github.com/kata-containers/documentation/blob/master/design/architec...
Thanks for the pointer! Stefano
On Wed, Dec 04, 2019 at 03:54:06PM +0000, Montes, Julio wrote:
Hi Stefano
Hi Julio
- Host: assign the same CID of VMs running in different network namespaces
this means two VMs running in different namespace can use the same CID?
Exactly.
currently we use VHOST_VSOCK_SET_GUEST_CID to get a unique context ID, is this going to change?
The only change is that the CID is unique in the network namespace domain.
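For reference, this is roughly how a VMM assigns the guest CID today through /dev/vhost-vsock (a rough sketch; error handling and the rest of the vhost setup are omitted). With the change, the "CID already in use" check would apply within a netns rather than globally:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <linux/vhost.h>

    int main(void)
    {
        __u64 cid = 3;  /* example CID */

        int vhost_fd = open("/dev/vhost-vsock", O_RDWR);
        if (vhost_fd < 0) {
            perror("open /dev/vhost-vsock");
            return 1;
        }

        /* The kernel refuses a CID that is already in use (EADDRINUSE). */
        if (ioctl(vhost_fd, VHOST_VSOCK_SET_GUEST_CID, &cid) < 0) {
            perror("VHOST_VSOCK_SET_GUEST_CID");
            close(vhost_fd);
            return 1;
        }

        /* ... a real VMM continues with the rest of the vhost setup ... */
        close(vhost_fd);
        return 0;
    }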
1. Is the VMM (e.g. QEMU) running in a network namespace?
yes, see https://github.com/kata-containers/runtime/blob/62cd08044d78912228d9dc800cb1...
2. Is the host application, that use vsock to communicate with the guest, running in the same network namespace?
afaik, no
This could be a problem with the RFC that I sent, because we allow only the processes in the same netns as the VMM to communicate with the guest.

Do you think it could be an easy change in the Kata runtime? I need to look better, but if you already know the answer you'll save me some time :)
3. Is the agent in the guest running outside a netns (default init_netns)?
yes
This is nice. Thank you very much, Stefano
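For completeness, the guest side is plain AF_VSOCK too; here is a minimal sketch of an agent-style listener (port 1024 is again just an example value). Since the agent stays in the guest's init_netns, this should keep working unchanged under the RFC:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_vm addr;
        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = VMADDR_CID_ANY;  /* accept connections on any local CID */
        addr.svm_port = 1024;           /* example port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 1) < 0) {
            perror("bind/listen");
            return 1;
        }

        int conn = accept(fd, NULL, NULL);
        if (conn < 0) {
            perror("accept");
            return 1;
        }

        char buf[64];
        ssize_t n = read(conn, buf, sizeof(buf));
        if (n > 0)
            printf("got %zd bytes from the host\n", n);

        close(conn);
        close(fd);
        return 0;
    }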
Stefano,
Do you think could be an easy change in the Kata runtime?
I need to look better, but if you already know the answer you'll save me some time :)
ok, this means all kata components must be in the same network namespace. fortunately, kata-shim and qemu run in the same network namespace, so the change should be easy, just move kata-proxy and kata-runtime into the same net namespace
On Wed, 2019-12-04 at 17:36 +0000, Montes, Julio wrote:
Stefano,
Do you think could be an easy change in the Kata runtime?
I need to look better, but if you already know the answer you'll save me some time :)
ok, this means all kata components must be in the same network namespace. fortunately, kata-shim and qemu run in the same network namespace, so the change should be easy, just move kata-proxy and kata-runtime into the same net namespace

Well in case of vsock, we don't need kata-proxy, the shim being directly connected to the VM through vsock. And the kata-shim runs in the same netns as the VMM, so we're good from this perspective.

In case of containerd-shimv2, we don't have kata-shim anymore, the shim being a simple thread part of the kata-runtime process, which means kata-runtime needs to enter the netns, and that's where it might be pretty complex. IIRC, Julio did some experiments to run the kata-runtime in a set of namespaces, but I don't think we merged it, did we?

Sebastien
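To illustrate what "entering the netns" means at the syscall level, here is a minimal sketch assuming an iproute2-style named namespace under /var/run/netns/; a Go component like kata-runtime would do the equivalent with setns(2) on a locked OS thread:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Join the network namespace created with "ip netns add <name>". */
    int join_netns(const char *name)
    {
        char path[256];
        snprintf(path, sizeof(path), "/var/run/netns/%s", name);

        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open netns");
            return -1;
        }

        /* Switch this thread's network namespace; under the proposed RFC,
         * vsock connections made afterwards are scoped to that netns. */
        if (setns(fd, CLONE_NEWNET) < 0) {
            perror("setns");
            close(fd);
            return -1;
        }

        close(fd);
        return 0;
    }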
On Thu, Dec 05, 2019 at 08:07:23AM +0000, Boeuf, Sebastien wrote:
On Wed, 2019-12-04 at 17:36 +0000, Montes, Julio wrote:
Stefano,
Do you think could be an easy change in the Kata runtime?
I need to look better, but if you already know the answer you'll save me some time :)
ok, this means all kata components must be in the same network namespace.
fortunately, kata-shim and qemu run in the same network namespace, so the change should be easy, just move kata-proxy and kata-runtime into the same net namespace
Well in case of vsock, we don't need kata-proxy, the shim being directly connected to the VM through vsock. And the kata-shim runs in the same netns as the VMM, so we're good from this perspective.
Cool!
In case of containerd-shimv2, we don't have kata-shim anymore, the shim being a simple thread part of the kata-runtime process, which means kata-runtime needs to enter the netns, and that's where it might be pretty complex. IIRC, Julio did some experiments to run the kata-runtime in a set of namespaces, but I don't think we merged it, did we?
So, if it's complicated, as Tao suggested, we absolutely must provide a way to disable it or assign a vsock device to a netns regardless of the VMM's netns. Thanks, Stefano
On 2019/12/5 01:05, Stefano Garzarella wrote:
This could be a problem with the RFC that I sent, because we allow only the processes in the same netns of the VMM, to communicate with the guest.
Do you think could be an easy change in the Kata runtime? I need to look better, but if you already know the answer you'll save me some time :)

Hi Stefano,
While I understand the motivation of the change, do users have an option to opt out of the namespaced vsock communication? I'm considering a possible scenario that someone uses a single host daemon to manage all the guests like we did in the hyperd project. Then there is no way for such a daemon to communicate with guests with namespaced vsock. Cheers, Tao
On Thu, Dec 05, 2019 at 10:39:08AM +0800, Peng Tao wrote:
Hi Tao,
While I understand the motivation of the change, do users have an option to opt out of the namespaced vsock communication? I'm considering a possible scenario that someone uses a single host daemon to manage all the guests like we did in the hyperd project. Then there is no way for such a daemon to communicate with guests with namespaced vsock.
It could be possible, but we would like to avoid it, because if the kernel is compiled with netns support, then we would like to keep it enabled for vsock as well.

A possible solution would be to provide a way to define the netns assigned to the device, adding a new ioctl to the vhost-vsock device (and a new parameter to QEMU's vsock device) or extending ip-link(8) to handle vsock devices.

Do you think it'll be okay?

Thanks,
Stefano
On 2019/12/5 17:31, Stefano Garzarella wrote:
It could be possible, but we would like to avoid it, because if the kernel is compiled with netns support, then we would like to leave it also in vsock.
A possible solution would be to provide a way to define the netns assigned to the device, adding a new ioctl to vhost-vsock device (and a new parameter to the QEMU's vsock device) or extending ip-link(8) to handle vsock devices.
Sorry, I don't quite understand your solution. Could you elaborate a bit more?

In my scenario, there is one management daemon on the host that needs to talk to many guests via vsock. Assuming the daemon lives in the host init netns, and each guest VMM is put in a different netns, how do you propose to solve the problem?

Thanks,
Tao
On Thu, Dec 05, 2019 at 05:49:08PM +0800, Peng Tao wrote:
Sorry, I don't quite understand your solution. Could you elaborate a bit more?
Sure, sorry for that.
In my scenario, there is one management daemon on the host that needs to talk to many guests via vsock. Assuming the daemon lives in the host init netns, and each guest vmm is put in a different netns, how do you propose to solve the problem?
1. If we add an ioctl to vhost-vsock, the VMM can specify the ID of netns
   where to assign the device (this requires to assign an id to the netns):

   $ ip netns add ns1
   $ ip netns add ns2
   $ ip netns set ns2 2

   # qemu runs in ns1 but vsock device is assigned to ns2 (e.g. the
   # management daemon is running in ns2)
   $ ip netns exec ns1 qemu-system-x86_64 ... \
       -device vhost-vsock-device,guest-cid=3,netns-id=2

   # qemu runs in ns1 but vsock device is assigned to init_ns (e.g. -1)
   $ ip netns exec ns1 qemu-system-x86_64 ... \
       -device vhost-vsock-device,guest-cid=3,netns-id=-1

2. If we extend the ip-link(8) we could do as for veth devices:

   $ ip link list
   (modified to show vsock${guest_cid}, so in this case there are two
    guests with CID 42 and 54)
   ...
   10: vsock42: ...
   11: vsock54: ...

   # assign to ns2
   $ ip link set vsock42 netns ns2
   $ ip link set vsock54 netns ns2

   # assign to init_ns (netns accepts also pid, so we can use 1 for init_ns)
   $ ip link set vsock42 netns 1
   $ ip link set vsock54 netns 1

The second option could be more complicated to do.

Cheers,
Stefano
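To make option 1 a bit more concrete from the VMM side, a purely illustrative sketch: the VHOST_VSOCK_SET_NETNS name, its request number, and the numeric netns-id argument are all hypothetical (nothing like this exists in linux/vhost.h today); they simply mirror the QEMU command line sketched above.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Hypothetical ioctl for illustration only; the RFC would have to
     * define the real name, request number and argument type. */
    #define VHOST_VSOCK_SET_NETNS _IOW(VHOST_VIRTIO, 0x62, int)

    int main(void)
    {
        int netns_id = 2;  /* the id set with "ip netns set ns2 2" above */

        int vhost_fd = open("/dev/vhost-vsock", O_RDWR);
        if (vhost_fd < 0) {
            perror("open /dev/vhost-vsock");
            return 1;
        }

        /* Attach the vsock device to the chosen netns instead of the one
         * the VMM is running in (an fd-based variant, mirroring setns(2),
         * would also be possible). */
        if (ioctl(vhost_fd, VHOST_VSOCK_SET_NETNS, &netns_id) < 0)
            perror("VHOST_VSOCK_SET_NETNS (hypothetical)");

        close(vhost_fd);
        return 0;
    }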
On 2019/12/5 18:15, Stefano Garzarella wrote:
1. If we add an ioctl to vhost-vsock, the VMM can specify the ID of netns where to assign the device (this requires to assign an id to the netns):
   $ ip netns add ns1
   $ ip netns add ns2
   $ ip netns set ns2 2

   # qemu runs in ns1 but vsock device is assigned to ns2 (e.g. the
   # management daemon is running in ns2)
   $ ip netns exec ns1 qemu-system-x86_64 ... \
       -device vhost-vsock-device,guest-cid=3,netns-id=2

   # qemu runs in ns1 but vsock device is assigned to init_ns (e.g. -1)
   $ ip netns exec ns1 qemu-system-x86_64 ... \
       -device vhost-vsock-device,guest-cid=3,netns-id=-1
OK, I see. So this is more like what we have for the netdev, where the device itself can live in a different netns than the one vmm is in. Then we can put all the vsock devices and the host management daemon in the same netns so that they can talk, and each vmm can still live in its own netns.

Sounds good to me;)
2. If we extend the ip-link(8) we could do as for veth devices:
   $ ip link list
   (modified to show vsock${guest_cid}, so in this case there are two
    guests with CID 42 and 54)
   ...
   10: vsock42: ...
   11: vsock54: ...

   # assign to ns2
   $ ip link set vsock42 netns ns2
   $ ip link set vsock54 netns ns2

   # assign to init_ns (netns accepts also pid, so we can use 1 for init_ns)
   $ ip link set vsock42 netns 1
   $ ip link set vsock54 netns 1
The second option could be more complicated to do.

Yeah, it's the same idea in a different form, right? Thanks for the explanation! I agree it can solve the single host management daemon issue.
Cheers,
Tao
On Thu, Dec 05, 2019 at 08:37:19PM +0800, Peng Tao wrote:
OK, I see. So this is more like what we have for the netdev, where the device itself can live in a different netns than the one vmm is in. Then we can put all the vsock devices and the host management daemon in the same netns so that they can talk, and each vmm can still live in its own netns.
Right!
Sounds good to me;)
:-)
The second option could be more complicated to do.

Yeah, it's the same idea in a different form, right?
Exactly!
Thanks for the explanation! I agree it can solve the single host management daemon issue.
Thanks for your feedback! Cheers, Stefano
participants (5)
- Boeuf, Sebastien
- Hunt, James O
- Montes, Julio
- Peng Tao
- Stefano Garzarella