#14663 Eval bug: Qwen 2.5 VL gets stuck in a loop
Issue Details
Name and Version
> ./llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H100 80GB HBM3, compute capability 9.0, VMM: yes
load_backend: loaded CUDA backend from /app/libggml-cuda.so
load_backend: loaded CPU backend from /app/libggml-cpu-icelake.so
version: 5884 (c31e6064)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Using ghcr.io/ggml-org/llama.cpp:full-cuda Docker image with Apptainer/Singularity.
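Roughly how the container is launched (a sketch, not the exact invocation; the .sif name is arbitrary and /app is where the binaries appear to live in this image, judging by the backend paths in the logs):

```sh
# Pull the CUDA-enabled image and run the bundled llama-cli with GPU passthrough (--nv).
apptainer pull llama-cpp-full-cuda.sif docker://ghcr.io/ggml-org/llama.cpp:full-cuda
apptainer exec --nv llama-cpp-full-cuda.sif /app/llama-cli --version
```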
Operating systems
Linux
GGML backends
CUDA
Hardware
NVIDIA H100 80GB HBM3
Driver Version: 575.57.08 CUDA Version: 12.9
Models
ggml-org/Qwen2.5-VL-32B-Instruct-GGUF
unsloth/Qwen2.5-VL-32B-Instruct-GGUF
Problem description & steps to reproduce
Using Qwen 2.5 VL for OCR to extract text from a scanned document often causes the model to get stuck in a loop, repeating the same few words forever.
From my testing, it happens with both the ggml-org/Qwen2.5-VL-32B-Instruct-GGUF and unsloth/Qwen2.5-VL-32B-Instruct-GGUF versions of the model. It seems to happen mainly with the Q8 and (B)F16 quantizations. With the default temperature it happens only sometimes, but with the temperature set to 0 it happens every time. I think it also happens with other parameter counts.
I previously also tried this with Ollama, where it happens every time with non-Q4 models or with flash attention enabled (ollama/ollama#11230). However, it doesn't happen with the Hugging Face demos of Qwen 2.5 VL or with the AWQ quantization under vLLM, regardless of the temperature, so it seems to be a problem with the GGUF versions of the model. A sketch of reproducing the same behaviour over the server API is included below the image link.
The image I'm using: https://github.com/user-attachments/assets/d392c3d9-b8fb-4d14-8974-15d843f937bb
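The same loop should also be reproducible over llama-server's OpenAI-compatible API instead of llama-mtmd-cli. The sketch below is how I would expect that to look; the /v1/chat/completions image_url payload is what I believe current builds accept (I have not re-verified the exact field names), and document.jpg stands for the scanned page linked above:

```sh
# Sketch only: llama-server from the same build, OpenAI-compatible endpoint,
# greedy sampling (temperature 0) to make the repetition deterministic.
./llama-server -hf unsloth/Qwen2.5-VL-32B-Instruct-GGUF:BF16 -ngl 100 --port 8080 &

# document.jpg = the scanned page linked above, sent inline as a base64 data URL
IMG_B64=$(base64 -w0 document.jpg)

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "temperature": 0,
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,${IMG_B64}"}},
      {"type": "text", "text": "Extract the text on the image. Respond only with the extracted text."}
    ]
  }]
}
EOF
```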
First Bad Commit
No response
Relevant log output
> ./llama-mtmd-cli -hf unsloth/Qwen2.5-VL-32B-Instruct-GGUF:BF16 -ngl 100 --temp 0.0
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H100 80GB HBM3, compute capability 9.0, VMM: yes
load_backend: loaded CUDA backend from /app/libggml-cuda.so
load_backend: loaded CPU backend from /app/libggml-cpu-icelake.so
curl_perform_with_retry: HEAD https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/BF16/Qwen2.5-VL-32B-Instruct-BF16-00001-of-00002.gguf (attempt 1 of 1)...
common_download_file_single: using cached file: /d/hpc/home/fs90700/.cache/llama.cpp/unsloth_Qwen2.5-VL-32B-Instruct-GGUF_BF16_Qwen2.5-VL-32B-Instruct-BF16-00001-of-00002.gguf
curl_perform_with_retry: HEAD https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/BF16/Qwen2.5-VL-32B-Instruct-BF16-00002-of-00002.gguf (attempt 1 of 1)...
common_download_file_single: using cached file: /d/hpc/home/fs90700/.cache/llama.cpp/unsloth_Qwen2.5-VL-32B-Instruct-GGUF_BF16_Qwen2.5-VL-32B-Instruct-BF16-00002-of-00002.gguf
curl_perform_with_retry: HEAD https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/mmproj-BF16.gguf (attempt 1 of 1)...
common_download_file_single: using cached file: /d/hpc/home/fs90700/.cache/llama.cpp/unsloth_Qwen2.5-VL-32B-Instruct-GGUF_mmproj-BF16.gguf
build: 5884 (c31e6064) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA H100 80GB HBM3) - 80553 MiB free
llama_model_loader: additional 1 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /d/hpc/home/fs90700/.cache/llama.cpp/unsloth_Qwen2.5-VL-32B-Instruct-GGUF_BF16_Qwen2.5-VL-32B-Instruct-BF16-00001-of-00002.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2vl
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5-Vl-32B-Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5-Vl-32B-Instruct
llama_model_loader: - kv 5: general.quantized_by str = Unsloth
llama_model_loader: - kv 6: general.size_label str = 32B
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 VL 32B Instruct
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-V...
llama_model_loader: - kv 13: general.tags arr[str,3] = ["multimodal", "unsloth", "image-text...
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2vl.block_count u32 = 64
llama_model_loader: - kv 16: qwen2vl.context_length u32 = 128000
llama_model_loader: - kv 17: qwen2vl.embedding_length u32 = 5120
llama_model_loader: - kv 18: qwen2vl.feed_forward_length u32 = 27648
llama_model_loader: - kv 19: qwen2vl.attention.head_count u32 = 40
llama_model_loader: - kv 20: qwen2vl.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: qwen2vl.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2vl.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 32
llama_model_loader: - kv 24: qwen2vl.rope.dimension_sections arr[i32,4] = [16, 24, 24, 0]
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 27: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t", ...
llama_model_loader: - kv 31: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 32: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 33: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 34: tokenizer.chat_template str = {% set image_count = namespace(value=...
llama_model_loader: - kv 35: split.no u16 = 0
llama_model_loader: - kv 36: split.count u16 = 2
llama_model_loader: - kv 37: split.tensors.count i32 = 771
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type bf16: 450 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = BF16
print_info: file size = 61.03 GiB (16.00 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2vl
print_info: vocab_only = 0
print_info: n_ctx_train = 128000
print_info: n_embd = 5120
print_info: n_layer = 64
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 27648
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = -1
print_info: rope type = 8
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 128000
print_info: rope_finetuned = unknown
print_info: model type = 32B
print_info: model params = 32.76 B
print_info: general.name = Qwen2.5-Vl-32B-Instruct
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 64 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 65/65 layers to GPU
load_tensors: CUDA0 model buffer size = 61009.27 MiB
load_tensors: CPU_Mapped model buffer size = 1485.00 MiB
.................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
llama_context: CUDA_Host output buffer size = 0.58 MiB
llama_kv_cache_unified: CUDA0 KV buffer size = 1024.00 MiB
llama_kv_cache_unified: size = 1024.00 MiB ( 4096 cells, 64 layers, 1 seqs), K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context: CUDA0 compute buffer size = 368.01 MiB
llama_context: CUDA_Host compute buffer size = 18.01 MiB
llama_context: graph nodes = 2502
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
mtmd_cli_context: chat template example:
<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
clip_model_loader: model name: Qwen2.5-Vl-32B-Instruct
clip_model_loader: description:
clip_model_loader: GGUF version: 3
clip_model_loader: alignment: 32
clip_model_loader: n_tensors: 519
clip_model_loader: n_kv: 31
clip_model_loader: has vision encoder
clip_ctx: CLIP using CUDA0 backend
load_hparams: projector: qwen2.5vl_merger
load_hparams: n_embd: 1280
load_hparams: n_head: 16
load_hparams: n_ff: 3456
load_hparams: n_layer: 32
load_hparams: ffn_op: silu
load_hparams: projection_dim: 5120
--- vision hparams ---
load_hparams: image_size: 1024
load_hparams: patch_size: 14
load_hparams: has_llava_proj: 0
load_hparams: minicpmv_version: 0
load_hparams: proj_scale_factor: 0
load_hparams: n_wa_pattern: 8
load_hparams: model size: 1314.85 MiB
load_hparams: metadata size: 0.18 MiB
alloc_compute_meta: CUDA0 compute buffer size = 3.63 MiB
alloc_compute_meta: CPU compute buffer size = 0.16 MiB
main: loading model: /d/hpc/home/fs90700/.cache/llama.cpp/unsloth_Qwen2.5-VL-32B-Instruct-GGUF_BF16_Qwen2.5-VL-32B-Instruct-BF16-00001-of-00002.gguf
Running in chat mode, available commands:
  /image <path>    load an image
  /clear           clear the chat history
  /quit or /exit   exit the program

> /image /d/hpc/home/fs90700/medieval/document.jpg
/d/hpc/home/fs90700/medieval/document.jpg image loaded

> Extract the text on the image.
Respond only with the extracted text. encoding image slice... image slice encoded in 291 ms decoding image batch 1/1, n_tokens_batch = 999 image decoded (batch 1/1) in 214 ms 1271, november 17. Falkenberg. Friderik s Falkenberga proda Nem.viteškemu redu v Ljubljani in ob Građašici za 55 mark ogl. Orig.: MHVK XV (1860), str.97. Prim.: F.Richter, Gesch.d.Stadt Lai-bach v Klunovem Archivu f.L.d.H.Krain II.-III, 193; F.Zwitter, Star.kranska mesta,... str.21; J.Žontar, Banke in bankirji... str.22, 32, op.21; M.Kos, Srednjeveška Ljubljana, str.41, op. 151, 153. In nomine Iesu Christi amen. Mora temporis transeunte actus temporis vniuersiter transeunt memoria ab humana, si non scripturum testimonio perhemantur. Quare ego Fridericus de Valchenberch confiteor presencium per tenorem vniuersis presentes uidentibus et uisuris, quod sex propri-- os meos mansos sitos in Awa et circa decursum minoris fluminis dicti Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus 
iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen-sium pura fide cum omnibus iuribus et attinentiis domus Thevtonicae pro quinquaginta marcis denariorum Aquilwgen^C-s > ^C Interrupted by user llama_perf_context_print: load time = 16771.74 ms llama_perf_context_print: prompt eval time = 606.64 ms / 1023 tokens ( 0.59 ms per token, 1686.32 tokens per second) llama_perf_context_print: eval time = 52384.14 ms / 1886 runs ( 27.78 ms per token, 36.00 tokens per second) llama_perf_context_print: total time = 132631.89 ms / 2909 tokens