Cannot re-initialize CUDA in forked subprocess

RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

Oh no! What can I do?

If you’re not doing anything special, this can be an unintended result of certain package combinations. Try locking down your packages in requirements.txt like this:

diffusers==0.7.0 # only if you're using it; any version is fine

(Since there are only a few packages here, it would be great if someone could figure out exactly which package is the problem and report back!)

Otherwise, if you’re intentionally trying to run more than one worker, that’s not supported. Here’s the technical explanation from erik[1]:

To elaborate, that exact error happens in PyTorch when running a multi-worker HTTP server, so I suspect it’s that. Running more than one worker on Banana doesn’t change anything, since the routing layer assumes a single worker. Plus, that multi-worker error appears to be unfixable: interprocess GPU memory sharing is an extremely difficult problem that we’ve put months of R&D into and hope to crack, but for now, ML servers can only run a single worker. Hope this context helps!
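For context, the 'spawn' start method the error message refers to is selected via Python's standard multiprocessing API. Here is a minimal, CUDA-free sketch of the pattern (the worker and message names are illustrative; on Banana you shouldn't need this, since only a single worker is supported):

```python
import multiprocessing as mp

def _worker(q):
    # In a real ML server, CUDA would be initialized here. With the
    # "spawn" start method the child process starts fresh, instead of
    # inheriting the parent's already-initialized CUDA state via fork,
    # which is what triggers the RuntimeError.
    q.put("ready")

def start_with_spawn():
    # Request the "spawn" start method explicitly rather than relying
    # on the platform default (which is "fork" on Linux).
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=_worker, args=(q,))
    p.start()
    msg = q.get()
    p.join()
    return msg

if __name__ == "__main__":
    print(start_with_spawn())
```

Note that spawned children re-import the main module, so process-starting code must be guarded by `if __name__ == "__main__":` as above.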

  1. Discord