vLLM

From the XccesS Wiki

Description

Install Docker in the usual way

Download

Normal (ROCm)

docker pull rocm/vllm-dev:nightly

gfx906

docker pull nalanzeyu/vllm-gfx906

Running

Variant 1:

docker run -it --rm --shm-size=8g --device=/dev/kfd --device=/dev/dri \
    --group-add video -p 8086:8000 \
    -v /mnt/share/models:/models \
    nalanzeyu/vllm-gfx906 \
    vllm serve /models/Qwen3-Coder-30B-A3B-Instruct-AWQ-4bit --served-model-name Homelab --max-model-len 30000 --enable-auto-tool-choice --tool-call-parser hermes
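Once the Variant 1 container is up, the server exposes an OpenAI-compatible API on the mapped host port 8086, with the model name "Homelab" from the --served-model-name flag. A minimal stdlib-only client sketch, assuming the server is running on localhost:

```python
import json
import urllib.request

# Assumes the Variant 1 container above is running and mapped to
# localhost:8086; "Homelab" matches the --served-model-name flag.
BASE_URL = "http://localhost:8086/v1"

def build_chat_request(prompt: str, model: str = "Homelab") -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def chat(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example call (requires the server to be up):
# print(chat("Write a hello-world function in Python."))
```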

Variant 2, tested 2025-12-18:

sudo docker run -it --rm --network=host \
--group-add=video --ipc=host --cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined --device /dev/kfd \
--device /dev/dri \
-v /home/hendrik/.lmstudio/models/:/app/models \
-e HF_HOME="/app/models" \
-e HF_TOKEN="<TOKEN>" \
-e NCCL_P2P_DISABLE=1 \
-e VLLM_CUSTOM_OPS=all \
-e VLLM_ROCM_USE_AITER=0 \
-e SAFETENSORS_FAST_GPU=1 \
-e PYTORCH_TUNABLEOP_ENABLED=1 \
rocm/vllm-dev:nightly

Without tensor parallelism (run inside the container):

vllm serve Qwen/Qwen3-VL-8B-Thinking --max_model_len 4096 --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser qwen3

With tensor parallelism:

vllm serve Qwen/Qwen3-VL-8B-Thinking --tp 2 --max_model_len 4096 --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser qwen3
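Both serve commands enable automatic tool choice with the hermes parser, so the server accepts OpenAI-style tool definitions. A sketch of such a request payload, assuming the defaults of Variant 2 (host networking, default port 8000); the "get_weather" tool is a made-up example, not part of vLLM:

```python
import json

def build_tool_request(prompt: str,
                       model: str = "Qwen/Qwen3-VL-8B-Thinking") -> dict:
    """Build a chat-completions payload that offers one callable tool."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool for illustration only.
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Let the model decide whether to call the tool
        # (what --enable-auto-tool-choice is for).
        "tool_choice": "auto",
    }

# POST this to http://localhost:8000/v1/chat/completions; any tool call
# the model emits is parsed by the hermes parser and returned in the
# response's message.tool_calls field.
print(json.dumps(build_tool_request("Weather in Berlin?"), indent=2))
```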

Benchmark:

Test

Known Issues

Useful Links