• geneva_convenience@lemmy.ml
    6 months ago

    Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?

    Also, dynamically allocating RAM seems more efficient than having to assign each kernel a fixed portion at boot.

  • HiddenLayer555@lemmy.ml
    6 months ago

    If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel plus servers that provide the same APIs for Linux apps? Maybe even seL4, which has formally verified behaviour. That way the microkernel can spin up arbitrary instances of whatever services are needed most.

    • Avid Amoeba@lemmy.ca
      6 months ago

      I imagine there are some overhead savings, but I don’t know how much. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without special passthrough features?

    • friend_of_satan@lemmy.world
      6 months ago

      I recently heard this great phrase:

      “A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”

      This would be somewhere between the two: each container could believe it has the OS to itself, but with different kernels.
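
      The distinction in that phrase is easy to see from a shell, assuming a Linux host with Docker installed (the `alpine` image here is just an illustrative choice) — a minimal sketch:

      ```shell
      # Host kernel release -- every container on this host shares this one kernel
      uname -r

      # A container reports the *same* kernel release as the host, e.g.:
      #   docker run --rm alpine uname -r
      # whereas a VM boots its own guest kernel, so `uname -r` inside the
      # guest typically shows a different version than the host.
      ```

      In other words, a container never has its own kernel to report, which is exactly why the multi-kernel setup discussed above can’t be done with containers alone.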