Issues: vllm-project/production-stack

bug: dynamically serving LoRA Adapters is not working [bug]
#331 opened Mar 28, 2025 by robert-moyai
Helm template improvements [feature request]
#307 opened Mar 19, 2025 by chosey85
Question: Is there any solution to load a local model? [question]
#304 opened Mar 18, 2025 by Cangxihui
[Roadmap] vLLM Production Stack roadmap for 2025 Q2 (2 of 27 tasks complete)
#300 opened Mar 17, 2025 by YuhanLiu11
Question about tolerations in servingEngineSpec [question]
#294 opened Mar 17, 2025 by hongkunyoo
feature: Terraform tutorial for MS Azure [feature request]
#271 opened Mar 12, 2025 by falconlee236
Question: pinning GPUs [question]
#270 opened Mar 12, 2025 by chosey85
feature: Support LoRA loading for model deployments [feature request]
#205 opened Mar 1, 2025 by ApostaC
feature: Support CRD based configuration [feature request]
#204 opened Mar 1, 2025 by rootfs
bug: File Access Error with vllm using runai_streamer on OCP [bug]
#193 opened Feb 27, 2025 by TamKez