Hi, please help me solve this problem:

```json
{
  "code": "APP_INFERENCE_ERROR",
  "name": "OutOfMemoryError",
  "message": "CUDA out of memory. Tried to allocate 50.91 GiB (GPU 0; 23.69 GiB total capacity; 16.33 GiB already allocated; 332.81 MiB free; 22.96 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
}
```

Stack trace:

```
Traceback (most recent call last):
  File "/api/server.py", line 53, in inference
    output = await user_src.inference(all_inputs, streaming_response)
  File "/api/app.py", line 232, in inference
    return await extra(
  File "/api/extras/upsample/upsample.py", line 207, in upsample
    output, _rgb = upsampler.enhance(img, outscale=4)  # TODO outscale param
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/api/Real-ESRGAN/realesrgan/utils.py", line 223, in enhance
    self.process()
  File "/api/Real-ESRGAN/realesrgan/utils.py", line 115, in process
    self.output = self.model(self.img)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/basicsr/archs/rrdbnet_arch.py", line 117, in forward
    feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest')))
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.91 GiB (GPU 0; 23.69 GiB total capacity; 16.33 GiB already allocated; 332.81 MiB free; 22.96 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Hey! Thanks for the report.

Can you give a bit more info?

Is this on kiri.art or your own computer?

It looks like it’s happening on upsampling, right? On all images or just some images? What is the resolution of the image you’re sending?
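For context, the failing allocation scales directly with input resolution: the traceback dies on a 64-channel fp32 feature map at 4x the input size inside RRDBNet's upsampling path. Here's a back-of-the-envelope sketch (my own simplification, assuming the stock 64-feature RRDBNet; not exact PyTorch accounting):

```python
def upsample_activation_gib(height, width, channels=64, scale=4, bytes_per_elem=4):
    """GiB needed for one fp32 feature map of `channels` at `scale`x input resolution."""
    return channels * (height * scale) * (width * scale) * bytes_per_elem / 2**30

# A 512x512 input needs about 1 GiB for this one tensor...
print(round(upsample_activation_gib(512, 512), 2))    # 1.0

# ...while a roughly 3650x3650 input needs about the 50.91 GiB in your error.
print(round(upsample_activation_gib(3650, 3650), 2))  # 50.82
```

So if the image you sent was a few thousand pixels on a side, this error is exactly what I'd expect.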

If it’s only on big images, I’ll see over the next few days whether I can squeeze out a bit more memory. But keep in mind that the upsampling is really meant to take small, low-detail images and turn them into photo quality, not to take existing high-resolution photos and make them even higher resolution. Of course, we should explain that better instead of just throwing an error!
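The usual way to squeeze out that memory is tiled processing (what Real-ESRGAN's `tile` option does): run the model on overlapping tiles so peak memory depends on the tile size rather than the image size. A framework-free sketch of the idea, using a toy nearest-neighbor "model" (the real upsampler's padding and blending details differ):

```python
import numpy as np

def upscale_in_tiles(img, upscale_fn, scale, tile=64, pad=8):
    """Upscale an (H, W, ...) array by running `upscale_fn` on overlapping tiles.

    Peak memory now depends on `tile`, not on the full image size; `pad`
    gives each tile surrounding context so tile borders line up seamlessly.
    """
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale) + img.shape[2:], dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Tile bounds with padding, clamped to the image.
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1, x1 = min(y + tile + pad, h), min(x + tile + pad, w)
            up = upscale_fn(img[y0:y1, x0:x1])
            # Crop away the padded border (at output scale) and stitch.
            ty, tx = (y - y0) * scale, (x - x0) * scale
            th = (min(y + tile, h) - y) * scale
            tw = (min(x + tile, w) - x) * scale
            out[y * scale:y * scale + th, x * scale:x * scale + tw] = \
                up[ty:ty + th, tx:tx + tw]
    return out

# Toy stand-in for the model: 2x nearest-neighbor upscaling.
nearest2x = lambda a: a.repeat(2, axis=0).repeat(2, axis=1)

img = np.random.rand(200, 300, 3).astype(np.float32)
tiled = upscale_in_tiles(img, nearest2x, scale=2)
assert np.array_equal(tiled, nearest2x(img))  # tiling changes memory use, not output
```

The trade-off is speed (many small model calls instead of one big one), which is why I'd still rather understand what image sizes you're actually sending.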

So thanks! Your feedback and answers to the questions above will help us make this better :pray: