@d-li14 hi, thanks for your contributions and for this amazing idea!
I'd like to try your involution() module in a non-mmdetection repo (YOLOv5), and was trying to figure out the best technical way to do this using your existing code here:
The naive implementation seems easier to integrate into new works, so I'd like to use that, and my main question is:
How much of a speed change do you see in training (and inference) when moving from naive to cuda? Thanks!
Thanks for your feedback!
We have not tried involution with the YOLO framework, and the practical speedup may depend on the specific platform and test settings. For reference, we benchmarked another one-stage detector, RetinaNet, in our work; its inference speedup on a single NVIDIA V100 GPU is roughly 40%.
Another major drawback of the naive implementation is its high GPU memory cost, since the unfold operation explicitly materializes every K × K patch of the input.
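For anyone integrating this elsewhere, the naive forward pass can be sketched roughly as below. This is a hedged, minimal reconstruction of the unfold-based approach described in the paper, not the repo's exact code; the class name and argument names (`reduction`, `groups`, etc.) are illustrative. It also makes the memory issue concrete: `nn.Unfold` expands the input to `B × (C·K·K) × H·W` before the per-pixel kernels are applied.

```python
import torch
import torch.nn as nn

class NaiveInvolution(nn.Module):
    """Sketch of a naive (unfold-based) involution layer.

    Assumptions: kernel generated per spatial position from a reduced
    feature map; channels shared within each of `groups` groups.
    """

    def __init__(self, channels, kernel_size=7, stride=1, groups=16, reduction=4):
        super().__init__()
        self.k, self.s, self.g = kernel_size, stride, groups
        self.channels = channels
        # Kernel-generation branch: reduce channels, then predict K*K weights per group
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, kernel_size ** 2 * groups, 1)
        self.down = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
        # The memory-hungry step: materializes every K x K patch explicitly
        self.unfold = nn.Unfold(kernel_size, padding=(kernel_size - 1) // 2, stride=stride)

    def forward(self, x):
        b, c, h, w = x.shape
        h, w = h // self.s, w // self.s
        # Generate one K x K kernel per output pixel per group: B, G, 1, K*K, H, W
        weight = self.span(self.reduce(self.down(x)))
        weight = weight.view(b, self.g, self.k ** 2, h, w).unsqueeze(2)
        # Unfolded patches: B, G, C//G, K*K, H, W
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
        # Weighted sum over the K*K window
        return (weight * patches).sum(dim=3).view(b, c, h, w)
```

A quick shape check: feeding a `(1, 16, 8, 8)` tensor through `NaiveInvolution(16, kernel_size=3, groups=4)` returns a `(1, 16, 8, 8)` tensor, so it drops in where a 3×3 conv with `padding=1` would.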