Around 90% of new models cannot be exported correctly with pnnx
Model

f5-tts

Log
```
% pnnx s.pt inputshape="[1,1226,1000],[2,16,1226,64],[2,16,1226,64],[1,1226,612],[1,1226,612],[1]i32"
pnnxparam = s.pnnx.param
pnnxbin = s.pnnx.bin
pnnxpy = s_pnnx.py
pnnxonnx = s.pnnx.onnx
ncnnparam = s.ncnn.param
ncnnbin = s.ncnn.bin
ncnnpy = s_ncnn.py
fp16 = 1
optlevel = 2
device = cpu
inputshape = [1,1226,1000]f32,[2,16,1226,64]f32,[2,16,1226,64]f32,[1,1226,612]f32,[1,1226,612]f32,[1]i32
inputshape2 =
customop =
moduleop =
############# pass_level0
inline module = f5_tts.model.backbones.dit.DiT
inline module = f5_tts.model.backbones.dit.InputEmbedding
inline module = f5_tts.model.modules.AdaLayerNorm
inline module = f5_tts.model.modules.AdaLayerNorm_Final
inline module = f5_tts.model.modules.Attention
inline module = f5_tts.model.modules.ConvPositionEmbedding
inline module = f5_tts.model.modules.DiTBlock
inline module = f5_tts.model.modules.FeedForward
inline module = f5_tts.model.backbones.dit.DiT
inline module = f5_tts.model.backbones.dit.InputEmbedding
inline module = f5_tts.model.modules.AdaLayerNorm
inline module = f5_tts.model.modules.AdaLayerNorm_Final
inline module = f5_tts.model.modules.Attention
inline module = f5_tts.model.modules.ConvPositionEmbedding
inline module = f5_tts.model.modules.DiTBlock
inline module = f5_tts.model.modules.FeedForward
----------------
libc++abi: terminating due to uncaught exception of type std::runtime_error: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/f5_tts/model/backbones/dit.py", line 115, in forward
    proj = self.proj
    input = torch.cat([noise, cat_mel_text], -1)
    _25 = (proj).forward(input, )
           ~~~~~~~~~~~~~ <--- HERE
    _26 = torch.add((conv_pos_embed).forward(_25, ), _25)
    return _26
  File "code/__torch__/torch/nn/modules/linear/___torch_mangle_25.py", line 12, in forward
    bias = self.bias
    weight = self.weight
    return torch.linear(input, weight, bias)
           ~~~~~~~~~~~~ <--- HERE
  def forward1(self: __torch__.torch.nn.modules.linear.___torch_mangle_25.Linear, input: Tensor) -> Tensor:

Traceback of TorchScript, original code (most recent call last):
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/linear.py(117): forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1543): _slow_forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1562): _call_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
  /Users/baiyue/work/F5-TTS-ONNX/F5_TTS/src/f5_tts/model/backbones/dit.py(86): forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1543): _slow_forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1562): _call_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
  /Users/baiyue/work/F5-TTS-ONNX/F5_TTS/src/f5_tts/model/backbones/dit.py(185): forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1543): _slow_forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1562): _call_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
  /Users/baiyue/work/F5-TTS-ONNX/Export_ONNX/F5_TTS/Export_F5.py(318): forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1543): _slow_forward
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1562): _call_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/jit/_trace.py(1275): trace_module
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/jit/_trace.py(695): _trace_impl
  /Users/baiyue/Library/Python/3.10/lib/python/site-packages/torch/jit/_trace.py(1000): trace
  /Users/baiyue/work/F5-TTS-ONNX/Export_ONNX/F5_TTS/Export_F5.py(556): <module>
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1226x1612 and 712x1024)
zsh: abort      pnnx s.pt
```
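For reference, the shape mismatch in the last frame can be reproduced outside pnnx. The following is a minimal sketch: the layer size (712 in, 1024 out) is read off the error message and the tensor shapes come from the inputshape argument above, not from the actual F5-TTS checkpoint or export script.

```python
import torch

# Linear layer sized to match the error message: its weight is (1024, 712),
# i.e. the proj layer expects 712 input features.
proj = torch.nn.Linear(712, 1024)

# Last dims taken from the inputshape argument passed to pnnx.
noise = torch.rand(1, 1226, 1000)
cat_mel_text = torch.rand(1, 1226, 612)

# Same concatenation as in dit.py: 1000 + 612 = 1612 features,
# which cannot be fed into a Linear expecting 712.
x = torch.cat([noise, cat_mel_text], dim=-1)
proj(x)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (1226x1612 and 712x1024)
```

If that reading is correct, the first inputshape entry would need a last dim of 100 (100 + 612 = 712) rather than 1000 for the trace to get past this layer; this is only a guess from the numbers in the log, and the actual dimensions would have to be confirmed against the export script.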