Eval bug: ROCm -> ggml_cuda_compute_forward: MUL_MAT failed when running unsloth/Kimi K2
Issue Details
Name and Version
$ build/bin/llama-cli --version
ROCm calling rocblas_initialize as a workaround for a rocBLAS bug
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 2: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
version: 5945 (938b7857)
built with cc (Ubuntu 14.2.0-19ubuntu2) 14.2.0 for x86_64-linux-gnu
Up to date with the main branch. Compiled with ROCm from the default Ubuntu 25.04 packages; no external dependencies or custom repositories added.

Build details:
export HIPCXX="$(hipconfig -l)/clang-17"
export HIP_PATH="$(hipconfig -R)"
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DLLAMA_CURL=ON -DGGML_USE_LLAMAFILE=ON -DGGML_NATIVE=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j80 --clean-first
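For completeness, a hedged sanity check (not part of the original build steps) that the HIP backend was actually built for gfx906 and that the runtime sees the three MI50s; rocminfo ships with the ROCm packages, and the strings scan of build/bin/libggml-hip.so is only a heuristic for the embedded offload targets:

# hedged sketch: verify gfx906 agents are visible and present in the HIP backend library
rocminfo | grep -i 'name:.*gfx'
strings build/bin/libggml-hip.so | grep -o 'gfx[0-9a-f]\+' | sort -u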
Operating systems
Linux
GGML backends
HIP
Hardware
GPU: 3x AMD Instinct MI50 (32GB variant)
CPU: 2x Intel Xeon Gold 6138
Dell R740 server with 1.5TB RAM
Models
unsloth/Kimi-K2-Instruct-GGUF UD-Q2_K_XL
Problem description & steps to reproduce
Kimi K2 throws the error from the title, while DeepSeek R1 0528 (also a Q2 quant from unsloth) runs without any problems on the same binary and almost identical parameters (only the context size and the -ot override change). I exported CUDA_LAUNCH_BLOCKING=1, but it gave me no extra details beyond the attached log.
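For reference, a rough sketch (not from the original run; the tensor name is illustrative for this deepseek2-style model) of which blocks the -ot override in the command below actually matches:

# blocks 5..60 match the -ot regex, so their routed-expert ffn_{gate,up,down}_exps
# tensors are kept on the CPU while everything else is offloaded to the GPUs
for i in $(seq 0 60); do echo "blk.$i.ffn_gate_exps.weight"; done \
  | grep -Ec '\.(5|6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.'
# prints 56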
First Bad Commit
No response
Relevant log output
export CUDA_LAUNCH_BLOCKING=1
numactl --interleave=all -- ./llama.cpp/build/bin/llama-cli \
  --model models/unsloth/Kimi-K2-Instruct-GGUF/UD-Q2_K_XL/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf \
  --device ROCm0,ROCm1,ROCm2 \
  --numa numactl \
  --threads 80 \
  --prio 3 \
  --temp 0.6 \
  --min_p 0.01 \
  --seed 3407 \
  --ctx-size 9182 \
  --n-gpu-layers 99 \
  -ot "\.(5|6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" \
  -no-cnv \
  --mlock \
  --no-mmap \
  -b 1024 -ub 1024 \
  --prompt "<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|><|im_user|>user<|im_middle|>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|><|im_assistant|>assistant<|im_middle|>"

ROCm calling rocblas_initialize as a workaround for a rocBLAS bug
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
  Device 2: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
build: 5945 (938b7857) with cc (Ubuntu 14.2.0-19ubuntu2) 14.2.0 for x86_64-linux-gnu
main: llama backend init
/proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) - 32394 MiB free
llama_model_load_from_file_impl: using device ROCm1 (AMD Radeon Graphics) - 32640 MiB free
llama_model_load_from_file_impl: using device ROCm2 (AMD Radeon Graphics) - 32640 MiB free
llama_model_loader: additional 7 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 61 key-value pairs and 1096 tensors from models/unsloth/Kimi-K2-Instruct-GGUF/UD-Q2_K_XL/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Kimi-K2-Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Kimi-K2-Instruct llama_model_loader: - kv 5: general.quantized_by str = Unsloth llama_model_loader: - kv 6: general.size_label str = 384x14B llama_model_loader: - kv 7: general.license str = other llama_model_loader: - kv 8: general.license.name str = modified-mit llama_model_loader: - kv 9: general.repo_url str = https://huggingface.co/unsloth llama_model_loader: - kv 10: general.base_model.count u32 = 1 llama_model_loader: - kv 11: general.base_model.0.name str = Kimi K2 Instruct llama_model_loader: - kv 12: general.base_model.0.organization str = Moonshotai llama_model_loader: - kv 13: general.base_model.0.repo_url str = https://huggingface.co/moonshotai/Kim... llama_model_loader: - kv 14: general.tags arr[str,1] = ["unsloth"] llama_model_loader: - kv 15: deepseek2.block_count u32 = 61 llama_model_loader: - kv 16: deepseek2.context_length u32 = 131072 llama_model_loader: - kv 17: deepseek2.embedding_length u32 = 7168 llama_model_loader: - kv 18: deepseek2.feed_forward_length u32 = 18432 llama_model_loader: - kv 19: deepseek2.attention.head_count u32 = 64 llama_model_loader: - kv 20: deepseek2.attention.head_count_kv u32 = 1 llama_model_loader: - kv 21: deepseek2.rope.freq_base f32 = 50000.000000 llama_model_loader: - kv 22: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 23: deepseek2.expert_used_count u32 = 8 llama_model_loader: - kv 24: deepseek2.leading_dense_block_count u32 = 1 llama_model_loader: - kv 25: deepseek2.vocab_size u32 = 163840 llama_model_loader: - kv 26: deepseek2.attention.q_lora_rank u32 = 1536 llama_model_loader: - kv 27: deepseek2.attention.kv_lora_rank u32 = 512 llama_model_loader: - kv 28: deepseek2.attention.key_length u32 = 576 llama_model_loader: - kv 29: deepseek2.attention.value_length u32 = 512 llama_model_loader: - kv 30: deepseek2.attention.key_length_mla u32 = 192 llama_model_loader: - kv 31: deepseek2.attention.value_length_mla u32 = 128 llama_model_loader: - kv 32: deepseek2.expert_feed_forward_length u32 = 2048 llama_model_loader: - kv 33: deepseek2.expert_count u32 = 384 llama_model_loader: - kv 34: deepseek2.expert_shared_count u32 = 1 llama_model_loader: - kv 35: deepseek2.expert_weights_scale f32 = 2.827000 llama_model_loader: - kv 36: deepseek2.expert_weights_norm bool = true llama_model_loader: - kv 37: deepseek2.expert_gating_func u32 = 2 llama_model_loader: - kv 38: deepseek2.rope.dimension_count u32 = 64 llama_model_loader: - kv 39: deepseek2.rope.scaling.type str = yarn llama_model_loader: - kv 40: deepseek2.rope.scaling.factor f32 = 32.000000 llama_model_loader: - kv 41: deepseek2.rope.scaling.original_context_length u32 = 4096 llama_model_loader: - kv 42: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000 llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 44: tokenizer.ggml.pre str = kimi-k2 llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,163840] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,163840] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,163328] = ["Ġ Ġ", "ĠĠ ĠĠ", "Ġ t", "i n",... 
llama_model_loader: - kv 48: tokenizer.ggml.bos_token_id u32 = 163584 llama_model_loader: - kv 49: tokenizer.ggml.eos_token_id u32 = 163585 llama_model_loader: - kv 50: tokenizer.ggml.padding_token_id u32 = 163839 llama_model_loader: - kv 51: tokenizer.chat_template str = {%- if tools -%}\n <|im_system|>tool_... llama_model_loader: - kv 52: general.quantization_version u32 = 2 llama_model_loader: - kv 53: general.file_type u32 = 10 llama_model_loader: - kv 54: quantize.imatrix.file str = Kimi-K2-Instruct-GGUF/imatrix_unsloth... llama_model_loader: - kv 55: quantize.imatrix.dataset str = unsloth_calibration_Kimi-K2-Instruct.txt llama_model_loader: - kv 56: quantize.imatrix.entries_count u32 = 667 llama_model_loader: - kv 57: quantize.imatrix.chunks_count u32 = 714 llama_model_loader: - kv 58: split.no u16 = 0 llama_model_loader: - kv 59: split.tensors.count i32 = 1096 llama_model_loader: - kv 60: split.count u16 = 8 llama_model_loader: - type f32: 365 tensors llama_model_loader: - type q8_0: 122 tensors llama_model_loader: - type q2_K: 120 tensors llama_model_loader: - type q3_K: 52 tensors llama_model_loader: - type q4_K: 399 tensors llama_model_loader: - type q5_K: 22 tensors llama_model_loader: - type q6_K: 16 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q2_K - Medium print_info: file size = 354.93 GiB (2.97 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 1.0607 MB print_info: arch = deepseek2 print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 7168 print_info: n_layer = 61 print_info: n_head = 64 print_info: n_head_kv = 1 print_info: n_rot = 64 print_info: n_swa = 0 print_info: is_swa_any = 0 print_info: n_embd_head_k = 576 print_info: n_embd_head_v = 512 print_info: n_gqa = 64 print_info: n_embd_k_gqa = 576 print_info: n_embd_v_gqa = 512 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-06 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 18432 print_info: n_expert = 384 print_info: n_expert_used = 8 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = yarn print_info: freq_base_train = 50000.0 print_info: freq_scale_train = 0.03125 print_info: n_ctx_orig_yarn = 4096 print_info: rope_finetuned = unknown print_info: model type = 671B print_info: model params = 1.03 T print_info: general.name = Kimi-K2-Instruct print_info: n_layer_dense_lead = 1 print_info: n_lora_q = 1536 print_info: n_lora_kv = 512 print_info: n_embd_head_k_mla = 192 print_info: n_embd_head_v_mla = 128 print_info: n_ff_exp = 2048 print_info: n_expert_shared = 1 print_info: expert_weights_scale = 2.8 print_info: expert_weights_norm = 1 print_info: expert_gating_func = sigmoid print_info: rope_yarn_log_mul = 0.1000 print_info: vocab type = BPE print_info: n_vocab = 163840 print_info: n_merges = 163328 print_info: BOS token = 163584 '[BOS]' print_info: EOS token = 163585 '[EOS]' print_info: EOT token = 163586 '<|im_end|>' print_info: PAD token = 163839 '[PAD]' print_info: LF token = 198 'Ċ' print_info: EOG token = 163585 '[EOS]' print_info: EOG token = 163586 '<|im_end|>' print_info: max token length = 512 load_tensors: loading model tensors, this can take a while... 
(mmap = false) load_tensors: offloading 61 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 62/62 layers to GPU load_tensors: ROCm0 model buffer size = 26944.94 MiB load_tensors: ROCm1 model buffer size = 1945.94 MiB load_tensors: ROCm2 model buffer size = 2714.32 MiB load_tensors: CPU model buffer size = 331842.00 MiB .................................................................................................... llama_context: constructing llama_context llama_context: non-unified KV cache requires ggml_set_rows() - forcing unified KV cache llama_context: n_seq_max = 1 llama_context: n_ctx = 9182 llama_context: n_ctx_per_seq = 9182 llama_context: n_batch = 1024 llama_context: n_ubatch = 1024 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: kv_unified = true llama_context: freq_base = 50000.0 llama_context: freq_scale = 0.03125 llama_context: n_ctx_per_seq (9182) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_context: ROCm_Host output buffer size = 0.62 MiB llama_kv_cache_unified: ROCm0 KV buffer size = 400.23 MiB llama_kv_cache_unified: ROCm1 KV buffer size = 400.23 MiB llama_kv_cache_unified: ROCm2 KV buffer size = 362.11 MiB llama_kv_cache_unified: size = 1162.57 MiB ( 9184 cells, 61 layers, 1/ 1 seqs), K (f16): 615.48 MiB, V (f16): 547.09 MiB llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility llama_context: ROCm0 compute buffer size = 4817.50 MiB llama_context: ROCm1 compute buffer size = 2735.88 MiB llama_context: ROCm2 compute buffer size = 2735.88 MiB llama_context: ROCm_Host compute buffer size = 63.89 MiB llama_context: graph nodes = 5035 llama_context: graph splits = 212 (with bs=1024), 116 (with bs=1) common_init_from_params: added [EOS] logit bias = -inf common_init_from_params: added <|im_end|> logit bias = -inf common_init_from_params: setting dry_penalty_last_n to ctx_size = 9184 common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable) main: llama threadpool init, n_threads = 80 system_info: n_threads = 80 (n_threads_batch = 80) / 80 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | sampler seed: 3407 sampler params: repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 9184 top_k = 40, top_p = 0.950, min_p = 0.010, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1. 000, temp = 0.600 mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dis t generate: n_ctx = 9184, n_batch = 1024, n_predict = -1, n_keep = 0 ... /home/ai/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:79: ROCm error ggml_cuda_compute_forward: MUL_MAT failed ROCm error: shared object initialization failed current device: 1, in function ggml_cuda_compute_forward at /home/dc/ai/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:2482 err ... (many instances of this line) ... [New LWP 19067] [New LWP 19046] [New LWP 19043] This GDB supports auto-downloading debuginfo from the following URLs: <https://debuginfod.ubuntu.com> Enable debuginfod for this session? 
(y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
Function(s) ^std::(move|forward|as_const|(__)?addressof) will be skipped when stepping.
Function(s) ^std::(shared|unique)_ptr<.*>::(get|operator) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|(forward_)?list|(unordered_|flat_)?(multi)?(map|set)|span)<.*>::(c?r?(begin|end)|front|back|data|size|empty) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|span)<.*>::operator.] will be skipped when stepping.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
__syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
warning: 56 ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S: No such file or directory
#0 __syscall_cancel_arch () at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
56 in ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
#1 0x00007b8ed969eb63 in __internal_syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=0, a6=0, nr=61) at ./nptl/cancellation.c:49
warning: 49 ./nptl/cancellation.c: No such file or directory
#2 __syscall_cancel (a1=<optimized out>, a2=<optimized out>, a3=<optimized out>, a4=<optimized out>, a5=a5@entry=0, a6=a6@entry=0, nr=61) at ./nptl/cancellation.c:75
75 in ./nptl/cancellation.c
#3 0x00007b8ed971afdf in __GI___wait4 (pid=<optimized out>, stat_loc=<optimized out>, options=<optimized out>, usage=<optimized out>) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
#4 0x00007b8ed9e1add3 in ggml_print_backtrace () from /home/dc/ai/llama.cpp/build/bin/libggml-base.so
#5 0x00007b8ed9e1af7b in ggml_abort () from /home/dc/ai/llama.cpp/build/bin/libggml-base.so
#6 0x00007b8ed6bcf9b2 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) () from /home/dc/ai/llama.cpp/build/bin/libggml-hip.so
#7 0x00007b8ed6bd6550 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) () from /home/dc/ai/llama.cpp/build/bin/libggml-hip.so
#8 0x00007b8ed9e33334 in ggml_backend_sched_graph_compute_async () from /home/dc/ai/llama.cpp/build/bin/libggml-base.so
#9 0x00007b8ed9f402b1 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/dc/ai/llama.cpp/build/bin/libllama.so
#10 0x00007b8ed9f41e0c in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_context_i*, ggml_status&) () from /home/dc/ai/llama.cpp/build/bin/libllama.so
#11 0x00007b8ed9f46af1 in llama_context::decode(llama_batch const&) () from /home/dc/ai/llama.cpp/build/bin/libllama.so
#12 0x00007b8ed9f47bef in llama_decode () from /home/dc/ai/llama.cpp/build/bin/libllama.so
#13 0x00006465c75b858f in main ()
[Inferior 1 (process 19042) detached]
./test_kimi_amd.sh: line 31: 19042 Aborted
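If more detail on the "shared object initialization failed" error would help triage, a hedged idea (not something from the run above) is to enable the HIP runtime's own logging and repeat the exact invocation:

# hedged suggestion: AMD_LOG_LEVEL is a HIP runtime debug variable; level 3 prints
# API and code-object load errors to stderr, which may name what failed to initialize
export AMD_LOG_LEVEL=3
# then re-run the same numactl / llama-cli command from the top of this log, unchanged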