Hi!
As recommended by Välimäki et al., a pre-emphasis filter can be applied before computing the ESR loss.
An auraloss.perceptual.FIRFilter instance, however, cannot be called successfully when the PyTorch device is the CPU.
Interestingly, the same instance can be called on an NVIDIA CUDA device without any runtime error.
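For context, the loss in question composes the two pieces roughly like this. This is a minimal sketch modelled on the PreEmphasisESRLoss visible in the traceback below; the class name and the constructor arguments (the usual pre-emphasis settings) are assumptions, not the exact code from my project.

```python
import torch
from auraloss.perceptual import FIRFilter
from auraloss.time import ESRLoss


class PreEmphasisESR(torch.nn.Module):
    """Pre-emphasis (first-order highpass FIR) followed by ESR loss (sketch)."""

    def __init__(self):
        super().__init__()
        # "hp" with coef=0.85 is the common pre-emphasis configuration (assumed here).
        self.pre_emphasis = FIRFilter(filter_type="hp", coef=0.85, fs=44100)
        self.esr = ESRLoss()

    def forward(self, y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Both inputs are (batch, channels, samples).
        y_hat, y = self.pre_emphasis(y_hat, y)
        return self.esr(y_hat, y)
```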
Expected Behavior
An auraloss.perceptual.FIRFilter instance can be called successfully regardless of the device.
Current Behavior
When an auraloss.perceptual.FIRFilter instance is called on the CPU, a runtime error is raised.
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[4], line 37
     33 losses[k].append(v)
     35 pd.DataFrame(losses).to_csv(job_eval_dir / f'loss.csv')
---> 37 test()

File ~/.local/share/virtualenvs/s4-dynamic-range-compressor-WjUGfTKg/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

Cell In[4], line 26, in test()
     23 y_hat: Tensor = model(x, parameters)
     25 for validation_loss, validation_criterion in validation_criterions.items():
---> 26     loss: Tensor = validation_criterion(y_hat.unsqueeze(1), y.unsqueeze(1))
     27     validation_losses[validation_loss] += loss.item()
     29 for k, v in list(validation_losses.items()):

File ~/.local/share/virtualenvs/s4-dynamic-range-compressor-WjUGfTKg/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/Developer/s4-dynamic-range-compressor/src/loss.py:42, in PreEmphasisESRLoss.forward(self, y_hat, y)
     40 def forward(self, y_hat: Tensor, y: Tensor) -> Tensor:
     41     if self.pre_emphasis_filter:
---> 42         y_hat, y = self.pre_emphasis_filter(y_hat, y)
     43     return self.esr(y_hat, y)

File ~/.local/share/virtualenvs/s4-dynamic-range-compressor-WjUGfTKg/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/.local/share/virtualenvs/s4-dynamic-range-compressor-WjUGfTKg/lib/python3.10/site-packages/auraloss/perceptual.py:125, in FIRFilter.forward(self, input, target)
    117 def forward(self, input, target):
    118     """Calculate forward propagation.
    119     Args:
    120         input (Tensor): Predicted signal (B, #channels, #samples).
   (...)
    123         Tensor: Filtered signal.
    124     """
--> 125     input = torch.nn.functional.conv1d(
    126         input, self.fir.weight.data, padding=self.ntaps // 2
    127     )
    128     target = torch.nn.functional.conv1d(
    129         target, self.fir.weight.data, padding=self.ntaps // 2
    130     )
    131     return input, target

RuntimeError: NNPACKSpatialConvolution_updateOutput failed
Steps to Reproduce
Create an auraloss.perceptual.FIRFilter instance.
Create two three-dimensional (batch size, audio channel, sample length) float32 PyTorch tensors with the same shape.
Move all tensors and the FIRFilter instance to the CPU device.
Call the FIRFilter instance with these two tensors as arguments (a minimal sketch follows the environment details below).
Context (Environment)
CPU: Apple M1 Max
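A minimal reproduction sketch of these steps. The constructor arguments and tensor shapes are illustrative, and the batch size of 16 reflects the threshold identified later in this thread rather than anything special about the steps themselves.

```python
import torch
from auraloss.perceptual import FIRFilter

# 1. Create the filter (pre-emphasis settings assumed).
fir = FIRFilter(filter_type="hp", coef=0.85, fs=44100)

# 2. Two float32 tensors of shape (batch size, audio channel, sample length).
y_hat = torch.rand(16, 1, 44100)
y = torch.rand(16, 1, 44100)

# 3. Everything on the CPU device.
fir.to("cpu")

# 4. Calling the filter raises
#    RuntimeError: NNPACKSpatialConvolution_updateOutput failed
y_hat_f, y_f = fir(y_hat, y)
```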
Hi, thanks for raising this issue. I did some testing and found something. The following example runs without error on my M1 Mac, but only when the batch size is less than 16. When I set bs=16, I get the same error as you reported. This does not appear to be a problem with auraloss, but instead a problem with the torch backend for CPU, specifically the convolution operation in NNPACK. For now, if you are using auraloss for evaluation on CPU, I would suggest using a smaller batch size to fix the issue. Let me know if that works.
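(A sketch of the kind of test described; the exact script may have differed, and the FIRFilter settings here are assumed.)

```python
import torch
from auraloss.perceptual import FIRFilter

fir = FIRFilter(filter_type="hp", coef=0.85, fs=44100)
fir.to("cpu")

for bs in (1, 8, 15, 16):
    y_hat = torch.rand(bs, 1, 44100)
    y = torch.rand(bs, 1, 44100)
    try:
        fir(y_hat, y)
        print(f"bs={bs}: ok")
    except RuntimeError as err:
        print(f"bs={bs}: {err}")  # fails from bs=16 on this machine
```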
Thanks, Christian! Yes, the error occurs when the batch size is greater than or equal to 16. An interesting bug in PyTorch; maybe I should report it upstream in the future.
Fix: the input tensors should be three-dimensional instead of two-dimensional. I have fixed that in my initial post.
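For anyone else evaluating on CPU, one way to apply the smaller-batch workaround without touching the dataloader is to split each batch before the filter. A rough sketch, assuming the helper name and the threshold of 15 are illustrative choices:

```python
import torch


def apply_fir_in_chunks(fir, y_hat, y, max_batch=15):
    """Apply an auraloss FIRFilter in sub-batches smaller than 16 so the CPU
    (NNPACK) convolution path never sees a large batch.
    Inputs are (batch, channels, samples) tensors of identical shape."""
    outs_hat, outs = [], []
    for yh_chunk, y_chunk in zip(y_hat.split(max_batch), y.split(max_batch)):
        yh_f, y_f = fir(yh_chunk, y_chunk)
        outs_hat.append(yh_f)
        outs.append(y_f)
    return torch.cat(outs_hat), torch.cat(outs)
```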