ERROR:root:Can not load PyTorch model. Please make sure that model saved like torch.save(model, PATH)
#8
I got exactly the same problem! Can't load the PyTorch model.
@IvanStoykov To understand what causes the problem, you can log the exception on line 72 of convertert.py: simply write `logging.error(str(e))` and run the script again. The problem may be a wrong path to your model rather than PyTorch itself. In any case, check your saving script and make sure you save the entire model, not only the model parameters.
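The advice above can be sketched as follows. The model and file names here are hypothetical stand-ins, and `weights_only=False` assumes PyTorch 1.13 or newer:

```python
import logging
import torch
import torch.nn as nn

logging.basicConfig(level=logging.ERROR)

# Hypothetical tiny model standing in for the real network.
model = nn.Linear(4, 2)

# Saving only the parameters: torch.load() on this file returns a
# plain dict of tensors, not a callable model, so a converter that
# expects a full nn.Module will fail on it.
torch.save(model.state_dict(), "params_only.pt")

# Saving the entire module, which is what the error message asks for.
torch.save(model, "full_model.pt")

# Logging the exception instead of swallowing it, as suggested above,
# makes the real failure (e.g. a wrong path) visible:
try:
    restored = torch.load("full_model.pt", map_location="cpu",
                          weights_only=False)
except Exception as e:
    logging.error(str(e))
```

Loading `params_only.pt` the same way would succeed but yield a dict, which is why "saved like torch.save(model, PATH)" matters here.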
I also faced the same problem while running this command:
I have the same error; the problem occurs here for me.
@ralienpp Did you solve it? I have the same issue.
I don't remember, unfortunately. Most likely it is because I abandoned the prototype I was working on at that time. |
I'm trying to convert a YOLOv5 best.pt weights file to a .tflite file so we can deploy the model on a flutter app.
This is the code:
```python
import torch

weights_path = '/content/drive/MyDrive/Weights/best.pt'
yolo_path = '/content/yolov5'

model = torch.hub.load(yolo_path, 'custom', weights_path, source='local')  # local repo
torch.save(model, '/content/pipeline.pt')
```
You can also go straight to the Colab for all the code:
https://colab.research.google.com/drive/19JeOkrxlGP6KtkfGbrkO87COc4pvSZSE?usp=sharing
I assume I'm not saving the model how the converter wants it, but I can't figure it out.
Can you please explain exactly how the file needs to be saved so the code will work?
Thanks a lot in advance!
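One way to narrow this down, assuming the converter reloads the file with `torch.load`, is to check what the saved file actually contains before handing it over. This is a sketch with a stand-in model, not the YOLOv5 pipeline itself; note also that a full-model pickle records references to the defining classes, so the same YOLOv5 repo code must be importable wherever the file is reloaded:

```python
import torch
import torch.nn as nn

def is_full_model(path: str) -> bool:
    """Return True if `path` holds a complete nn.Module (what the
    converter expects), False if it holds only a state_dict."""
    obj = torch.load(path, map_location="cpu", weights_only=False)
    return isinstance(obj, nn.Module)

# Demo with a stand-in model; the real check would point at
# /content/pipeline.pt from the snippet above.
demo = nn.Linear(3, 1)
torch.save(demo, "demo_full.pt")
torch.save(demo.state_dict(), "demo_params.pt")

print(is_full_model("demo_full.pt"))    # expect True
print(is_full_model("demo_params.pt"))  # expect False
```

If the check returns False for your file, re-save with `torch.save(model, PATH)` on the object returned by `torch.hub.load`.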