Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?
Also, dynamically allocating RAM seems more efficient than having to assign each kernel a fixed portion at boot.
If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel plus servers that provide the same APIs for Linux apps? Maybe even seL4, whose behaviour is formally verified. That way the microkernel can spin up arbitrary instances of whichever services are needed most.
I always thought that Minix was a superior architecture to be honest.
How is this better than a hypervisor OS running multiple VMs?
I imagine there are some overhead savings, but I don’t know how much. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without needing special passthrough features?
There is no hypervisor. So, no hypervisor to update and manage.
I recently heard this great phrase:
“A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”
This would be somewhere between that, where each container could believe it has the OS to itself, but with different kernels.
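To make the container half of that phrase concrete: on Linux, a process’s view of “the OS” is defined by its namespaces, which show up as symlinks under `/proc/self/ns/`. A container runtime hands a process fresh namespaces so it believes it has the OS to itself; a VM instead hands a whole guest kernel fresh (virtual) hardware. A minimal sketch, assuming a Linux host (the namespace names used are the standard ones from `namespaces(7)`):

```python
import os

# Each entry under /proc/self/ns/ identifies which instance of that
# namespace this process lives in. Two processes in the same container
# see the same IDs; a freshly containerized process sees new ones.
for ns in ("pid", "mnt", "net", "uts"):
    try:
        print(ns, os.readlink(f"/proc/self/ns/{ns}"))
    except FileNotFoundError:
        # Non-Linux system, or a kernel built without this namespace type.
        print(ns, "namespace not available here")
```

A multikernel design would sit a level below this: the isolation boundary wouldn’t be a set of namespace IDs inside one kernel, but separate kernels entirely.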