ConvTranspose does not work with BF16 #565

Open
pranavm-nvidia opened this issue Mar 11, 2025 · 0 comments
If we attempt to use BF16 in ConvTranspose (enabling this requires changes in both MLIR-TRT and Tripy), we get the following error:

MTRTException: failed to run pass pipeline
    Error Code: 9: Skipping tactic 0x0000000000000000 due to exception [type.cpp:infer_type:145] Could not infer output types for operation:   746: deconv: result0'_before_bias- | [tensorrt_slice] () _output'-(bf16[1,3,5,5][]so[], mem_prop=0), [tensorrt_constant] () _0_constantBFloat16-{1, 1, 1, 1, 1, 1, 1, 1, ...}(bf16[3,1,3,3][9,9,3,1]so[3,2,1,0], mem_prop=0)<entry>, __mye747[tensorrt_deconvolution] () _alpha-1F:(f32[][]so[], mem_prop=0)<entry>, __mye748[tensorrt_deconvolution] () _beta-0F:(f32[][]so[], mem_prop=0)<entry>, stream = 0 // [tensorrt_deconvolution] () 
             | n_groups: 3  lpad: {0, 0}  rpad: {0, 0}  pad_mode: 0 strides: {1, 1}  dilations: {1, 1} , No matching rules found for input operand types
    IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[[tensorrt.constant] () _0...[tensorrt.deconvolution] () ]}.)
    () error: failed to translate function 'tensorrt_cluster' to a TensorRT engine
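For context, the `tensorrt_deconvolution` op in the log is a transposed convolution: each input element scatters a kernel-scaled patch into the output. A minimal pure-Python sketch of the single-channel, unpadded, undilated case (the function name and simplifications are mine, not Tripy's or TensorRT's API) shows the computation that fails to find a BF16 implementation:

```python
def conv_transpose2d(x, w, stride=1):
    # x: H_in x W_in input, w: K x K kernel
    # (single channel, no groups/padding/dilation -- a simplification
    # of the 3-group, 3x3-kernel case in the error log)
    h_in, w_in = len(x), len(x[0])
    k = len(w)
    # output size for an unpadded transposed convolution
    h_out = (h_in - 1) * stride + k
    w_out = (w_in - 1) * stride + k
    y = [[0.0] * w_out for _ in range(h_out)]
    # each input element scatters a scaled copy of the kernel into the output
    for i in range(h_in):
        for j in range(w_in):
            for a in range(k):
                for b in range(k):
                    y[i * stride + a][j * stride + b] += x[i][j] * w[a][b]
    return y

# a 5x5 input with a 3x3 kernel (as in the log's bf16[1,3,5,5] / [3,1,3,3]
# shapes) yields a 7x7 output per channel
out = conv_transpose2d([[0.0] * 5 for _ in range(5)], [[1.0] * 3] * 3)
```

Until a BF16 kernel is available, the usual workaround is to upcast the inputs to FP32, run the deconvolution, and downcast the result.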