[Usage]: add multiple LoRA modules in Docker #7286
Comments
I think the LoRA path should be the path inside the container, not the host path.
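A minimal sketch of what that could look like, assuming the mount targets from the command in this issue (`/app/lora/xyz` and `/app/lora/abc`) and keeping the adapter names the asker chose; treat the exact flag behavior as an assumption to verify against your vLLM version:

```shell
# Sketch: point --lora-modules at the container-side mount targets,
# not the host paths. Both name=path pairs are passed after a single
# --lora-modules flag, since repeating the flag may keep only the
# last occurrence with argparse-style parsing (assumption).
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v /datadrive/finetune_model/infosys:/app/lora/xyz \
  -v /datadrive/finetune_model/dummy:/app/lora/abc \
  -p 8000:8000 \
  vllm/vllm-openai --enable-lora \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --lora-modules xyz-lora=/app/lora/xyz abc-lora=/app/lora/abc
```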
Hi @youkaichao, something like that.
You can ask ChatGPT for more details.
With the 0.5.4 Docker image I get: `api_server.py: error: unrecognized arguments: --lora-modules test-lora=/vllm-workspace/xxxx/`
Hi @Cloopen-ReLiNK
Is it just me, or is the vLLM Docker documentation not very extensive? I couldn't find what other command-line arguments you can use.
Closing as this issue appears to be resolved.
Your current environment
How would you like to use vllm
Hi, I want to attach LoRA adapters using the following docker command:
```shell
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v /datadrive/finetune_model/infosys:/app/lora/xyz \
  -v /datadrive/finetune_model/dummy:/app/lora/abc \
  -p 8000:8000 \
  --env "HUGGING_FACE_HUB_TOKEN=" \
  vllm/vllm-openai --enable-lora \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --lora-modules xyz-lora=/datadrive/finetune_model/xyz \
  --lora-modules abc-lora=/datadrive/finetune_model/abc
```