Hi folks,

On Wednesday, Oct 17th, we held a Kata deep dive meetup in Beijing. Most of the major cloud providers and ICT equipment vendors in China attended the meetup and exchanged their thoughts, practices, and use cases. [The participants](https://etherpad.openstack.org/p/kata-meetup-beijing-2018) included Baidu, Alibaba, Kingsoft, Tencent, QingCloud, Suning, IBM, UnionPay, Netease, China Unicom, ZTE, Unicloud, AntFin, Lenovo, Spirent, etc.

The one-day meetup was supported by Intel (sponsor), Huawei (sponsor), Hyper.sh, and the OpenStack Foundation. I'd especially like to thank Maggie Liang from Intel, who effectively led the organization of the meetup.

Jonathan Bryce, executive director of the OpenStack Foundation, who was visiting China that week, gave a short opening speech for the meetup. Before the sessions, Yuntong Jin (Intel) and I (Xu Wang from Hyper.sh) gave an introduction to the meetup and a status update on development in the Kata Containers community.

In the morning, developers from Baidu, Huawei, and ZTE presented their practices in Edge, Public Cloud, and NFV use cases. In the early afternoon, Intel developers gave an introduction to the NEMU VMM.

After all the sessions, the attendees were grouped by topic and had a two-hour open discussion. The following topics were addressed:

- In-sandbox networking policies:
  - Case:
    - comes from Baidu;
    - not a Kubernetes scenario;
    - a network interface is configured for the sandbox and connected to the management network;
    - the processes in the sandbox are divided into two groups based on PID;
    - the processes from the provider may access the local (mgmt) network, while the other processes, which run guest binaries, do not have that permission.
  - Comments from Xu: we could generalize the requirement as applying different networking rules to different processes in the sandbox.
- VLAN networking mechanism support and hotplug:
  - Case:
    - an NFV scenario;
    - configure different VLANs for different tenants instead of overlays.
  - Comments from Xu:
    - the bridge + veth pair method might work for VLAN;
    - users could write their own CNI plugin for acceleration;
    - there is ongoing work in the upstream community to help general CNI plugins support Kata; however, it might require changes to the current CNI interface and existing plugins.
- Networking performance metering and tuning:
  - many participants care about networking performance;
  - it would be appreciated if users could share their real-life cases, which would help the community find and fix issues more effectively.

Many other topics were discussed as well:

- 9PFS performance and alternatives;
- Block device based rootfs support;
- Memory footprint optimization;
- Streaming media offload support;
- Vsock support;
- Container image management issues for big container images (e.g., some TensorFlow images are 8 GB+).

Some of the feature requests and issues will be raised on GitHub.

Cheers,

--
Xu Wang
CTO & Cofounder, Hyper
github/twitter/wechat: @gnawux
Hyper_: Make VM run like container