Eval bug: Regression: Tool calls still returned in content field as JSON string instead of tool_calls array
Issue Details
Name and Version
Environment
```
./llama-server --version
version: 5894 (494c5899)
built with Apple clang version 17.0.0 (clang-1700.0.13.5) for arm64-apple-darwin24.5.0
```
Server Configuration
```shell
export LLAMA_ARG_MODEL="/Users/alhena/server/models/Devstral-Small-2507-UD-Q8_K_XL.gguf"
export LLAMA_ARG_PORT="11454"
export LLAMA_ARG_CTX_SIZE="16384"
export LLAMA_ARG_HOST=0.0.0.0
export LLAMA_ARG_JINJA=1
export LLAMA_ARG_FA=1
```
Operating systems
Mac
GGML backends
Metal
Hardware
Mac Studio 64GB, 10 CPU cores, 32 GPU cores
Models
https://huggingface.co/unsloth/Devstral-Small-2507-GGUF
Problem description & steps to reproduce
Problem Description
Tool calls are intermittently returned as a JSON string in the `content` field instead of being properly formatted in the `tool_calls` array. This appears to be a regression of issue #12256, which was supposedly fixed by PR #12291.
Observed Behavior
The issue is intermittent: the same server with the same configuration sometimes returns tool calls correctly and sometimes returns them as JSON strings in the `content` field.
Case 1: Incorrect format (tool calls as JSON string in content)
```yaml
# From 141523_454_llamacpp_response.yaml
timestamp: '2025-07-15T14:15:33.019553'
data:
  choices:
    - finish_reason: stop
      index: 0
      message:
        role: assistant
        content: "{\n  \"tool_calls\": [\n    {\n      \"name\": \"json_set\",\n      \"arguments\": {\n        \"path\": \"$.game_scores.G02.scores.Sottocastello\",\n        \"value\": 45.12\n      },\n      \"id\": \"74e27f938\"\n    }\n  ]\n}"
```
Case 2: Correct format (tool calls properly structured)
```yaml
# From 145858_878_llamacpp_response.yaml
timestamp: '2025-07-15T14:59:14.411130'
data:
  choices:
    - finish_reason: tool_calls
      index: 0
      message:
        role: assistant
        content: null
        tool_calls:
          - type: function
            function:
              name: json_set
              arguments: '{"path":"$.game_scores.G02.status","value":"completed"}'
            id: i011r411j
```
Note the differences:
- Incorrect: `finish_reason: stop`, with the tool calls in the `content` field as a JSON string
- Correct: `finish_reason: tool_calls`, with a proper `tool_calls` array and `content: null`
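Until this is fixed server-side, downstream code has to tolerate both shapes. The following is a minimal client-side sketch (the `extract_tool_calls` helper and its fallback parsing are a hypothetical workaround, not part of llama.cpp or the OpenAI spec), keyed off the two layouts shown above:

```python
import json

def extract_tool_calls(message: dict) -> list:
    """Return tool calls whether the server used the correct shape or the buggy one."""
    # Correct shape (Case 2): a tool_calls array is present and content is null.
    if message.get("tool_calls"):
        return message["tool_calls"]
    # Buggy shape (Case 1): content is a JSON string wrapping a "tool_calls" key.
    content = message.get("content")
    if isinstance(content, str):
        try:
            payload = json.loads(content)
        except json.JSONDecodeError:
            return []  # ordinary text reply, not a tool call
        if isinstance(payload, dict) and isinstance(payload.get("tool_calls"), list):
            return payload["tool_calls"]
    return []
```

Note that entries recovered from the buggy shape keep the raw `{name, arguments, id}` layout from Case 1 rather than the OpenAI `{type, function: {...}}` wrapper, so callers normalizing to the OpenAI shape still need to re-wrap them.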
Request Details
The request includes multiple tool definitions in OpenAI format:
```yaml
# From 141523_454_llamacpp_request.yaml (truncated)
tools:
  - type: function
    function:
      name: json_set
      description: 'Imposta un valore a un path JSON specifico...'
      parameters:
        type: object
        properties:
          path:
            type: string
            description: JSONPath dove impostare il valore
          value:
            type: [string, number, boolean, object, array, 'null']
            description: Il valore da impostare
        required: [path, value]
tool_choice: auto
```
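For reference, a request of this shape can be replayed against the server configured above with a plain POST to the OpenAI-compatible endpoint. This is a sketch, not the original traffic: the user message is invented for illustration, the host/port follow the configuration section, and the tool definition mirrors the truncated YAML:

```python
import requests

payload = {
    # Single-model server; the name here is informational.
    "model": "Devstral-Small-2507-UD-Q8_K_XL",
    "messages": [
        # Hypothetical prompt; the real messages come from the YAML traces.
        {"role": "user", "content": "Set game_scores.G02.scores.Sottocastello to 45.12"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "json_set",
            "description": "Imposta un valore a un path JSON specifico...",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "JSONPath dove impostare il valore"},
                    "value": {
                        "type": ["string", "number", "boolean", "object", "array", "null"],
                        "description": "Il valore da impostare",
                    },
                },
                "required": ["path", "value"],
            },
        },
    }],
    "tool_choice": "auto",
}

resp = requests.post("http://localhost:11454/v1/chat/completions", json=payload, timeout=300)
choice = resp.json()["choices"][0]
print(choice["finish_reason"])  # "tool_calls" when correct, "stop" when the bug hits
```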
Additional Context
- The issue is intermittent: sometimes tool calls are formatted correctly, other times they appear as JSON strings in `content` (a small harness for measuring this is sketched below)
- Using `LLAMA_ARG_JINJA=1`, which should enable proper tool-call handling
- The model (Devstral) should support tool calling, given that the tool-call structure is parsed successfully in the correct case
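Because single requests often succeed, repeating the identical request and tallying the `finish_reason` values makes the intermittency measurable. A hypothetical harness (the caller supplies the request body, e.g. the `payload` dict from the reproduction sketch above):

```python
from collections import Counter

import requests

def tally_finish_reasons(payload: dict, url: str, n: int = 20) -> Counter:
    """Send the same chat-completion request n times and count finish_reason values."""
    shapes = Counter()
    for _ in range(n):
        r = requests.post(url, json=payload, timeout=300)
        r.raise_for_status()
        shapes[r.json()["choices"][0]["finish_reason"]] += 1
    return shapes

# A mix of 'tool_calls' (correct) and 'stop' (buggy JSON-in-content) counts on an
# identical request demonstrates the intermittent regression, e.g.:
#   tally_finish_reasons(payload, "http://localhost:11454/v1/chat/completions")
```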
Logs
Full request/response YAML files are available showing the complete interaction in which this issue occurred.
This regression impacts OpenAI API compatibility and breaks downstream applications expecting properly formatted tool calls.
First Bad Commit
No response
Relevant log output
```
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 16384
llama_context: n_ctx_per_seq = 16384
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (16384) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M1 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = true
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_set_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_c4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f16 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
llama_context: CPU output buffer size = 0.50 MiB
llama_kv_cache_unified: Metal KV buffer size = 2560.00 MiB
llama_kv_cache_unified: size = 2560.00 MiB ( 16384 cells, 40 layers, 1 seqs), K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context: Metal compute buffer size = 1092.00 MiB llama_context: CPU compute buffer size = 42.01 MiB llama_context: graph nodes = 1446 llama_context: graph splits = 2 common_init_from_params: setting dry_penalty_last_n to ctx_size = 16384 common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable) srv init: initializing slots, n_slots = 1 slot init: id 0 | task -1 | new slot n_ctx_slot = 16384 main: model loaded main: chat template, chat_template: {#- Copyright 2025-present the Unsloth team. All rights reserved. #} {#- Licensed under the Apache License, Version 2.0 (the "License") #} {#- Edits made by Unsloth #} {%- set default_system_message = 'You are Devstral, a helpful agentic model trained by Mistral AI and using the OpenHands scaffold. You can interact with a computer to solve tasks.\n\n<ROLE>\nYour primary role is to assist users by executing commands, modifying code, and solving technical problems effectively. You should be thorough, methodical, and prioritize quality over speed.\n* If the user asks a question, like \"why is X happening\", don\'t try to fix the problem. Just give an answer to the question.\n</ROLE>\n\n<EFFICIENCY>\n* Each action you take is somewhat expensive. Wherever possible, combine multiple actions into a single action, e.g. combine multiple bash commands into one, using sed and grep to edit/view multiple files at once.\n* When exploring the codebase, use efficient tools like find, grep, and git commands with appropriate filters to minimize unnecessary operations.\n</EFFICIENCY>\n\n<FILE_SYSTEM_GUIDELINES>\n* When a user provides a file path, do NOT assume it\'s relative to the current working directory. First explore the file system to locate the file before working on it.\n* If asked to edit a file, edit the file directly, rather than creating a new file with a different filename.\n* For global search-and-replace operations, consider using `sed` instead of opening file editors multiple times.\n</FILE_SYSTEM_GUIDELINES>\n\n<CODE_QUALITY>\n* Write clean, efficient code with minimal comments. Avoid redundancy in comments: Do not repeat information that can be easily inferred from the code itself.\n* When implementing solutions, focus on making the minimal changes needed to solve the problem.\n* Before implementing any changes, first thoroughly understand the codebase through exploration.\n* If you are adding a lot of code to a function or file, consider splitting the function or file into smaller pieces when appropriate.\n</CODE_QUALITY>\n\n<VERSION_CONTROL>\n* When configuring git credentials, use \"openhands\" as the user.name and \"openhands@all-hands.dev\" as the user.email by default, unless explicitly instructed otherwise.\n* Exercise caution with git operations. Do NOT make potentially dangerous changes (e.g., pushing to main, deleting repositories) unless explicitly asked to do so.\n* When committing changes, use `git status` to see all modified files, and stage all files necessary for the commit. 
Use `git commit -a` whenever possible.\n* Do NOT commit files that typically shouldn\'t go into version control (e.g., node_modules/, .env files, build directories, cache files, large binaries) unless explicitly instructed by the user.\n* If unsure about committing certain files, check for the presence of .gitignore files or ask the user for clarification.\n</VERSION_CONTROL>\n\n<PULL_REQUESTS>\n* When creating pull requests, create only ONE per session/issue unless explicitly instructed otherwise.\n* When working with an existing PR, update it with new commits rather than creating additional PRs for the same issue.\n* When updating a PR, preserve the original PR title and purpose, updating description only when necessary.\n</PULL_REQUESTS>\n\n<PROBLEM_SOLVING_WORKFLOW>\n1. EXPLORATION: Thoroughly explore relevant files and understand the context before proposing solutions\n2. ANALYSIS: Consider multiple approaches and select the most promising one\n3. TESTING:\n * For bug fixes: Create tests to verify issues before implementing fixes\n * For new features: Consider test-driven development when appropriate\n * If the repository lacks testing infrastructure and implementing tests would require extensive setup, consult with the user before investing time in building testing infrastructure\n * If the environment is not set up to run tests, consult with the user first before investing time to install all dependencies\n4. IMPLEMENTATION: Make focused, minimal changes to address the problem\n5. VERIFICATION: If the environment is set up to run tests, test your implementation thoroughly, including edge cases. If the environment is not set up to run tests, consult with the user first before investing time to run tests.\n</PROBLEM_SOLVING_WORKFLOW>\n\n<SECURITY>\n* Only use GITHUB_TOKEN and other credentials in ways the user has explicitly requested and would expect.\n* Use APIs to work with GitHub or other platforms, unless the user asks otherwise or your task requires browsing.\n</SECURITY>\n\n<ENVIRONMENT_SETUP>\n* When user asks you to run an application, don\'t stop if the application is not installed. Instead, please install the application and run the command again.\n* If you encounter missing dependencies:\n 1. First, look around in the repository for existing dependency files (requirements.txt, pyproject.toml, package.json, Gemfile, etc.)\n 2. If dependency files exist, use them to install all dependencies at once (e.g., `pip install -r requirements.txt`, `npm install`, etc.)\n 3. Only install individual packages directly if no dependency files are found or if only specific packages are needed\n* Similarly, if you encounter missing dependencies for essential tools requested by the user, install them when possible.\n</ENVIRONMENT_SETUP>\n\n<TROUBLESHOOTING>\n* If you\'ve made repeated attempts to solve a problem but tests still fail or the user reports it\'s still broken:\n 1. Step back and reflect on 5-7 different possible sources of the problem\n 2. Assess the likelihood of each possible cause\n 3. Methodically address the most likely causes, starting with the highest probability\n 4. Document your reasoning process\n* When you run into any major issue while executing a plan from the user, please don\'t try to directly work around it. 
Instead, propose a new plan and confirm with the user before proceeding.\n</TROUBLESHOOTING>' %} {{- bos_token }} {%- if messages[0]['role'] == 'system' %} {%- if messages[0]['content'] is string %} {%- set system_message = messages[0]['content'] %} {%- else %} {%- set system_message = messages[0]['content'][0]['text'] %} {%- endif %} {%- set loop_messages = messages[1:] %} {%- else %} {%- set system_message = default_system_message %} {%- set loop_messages = messages %} {%- endif %} {{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }} {#- Tool description appended ONLY to last user message. Edits made by Unsloth #} {#- Tool description appended also if last message is tool. Edits made by Unsloth #} {%- set tools_description = "" %} {%- set has_tools = false %} {%- if tools is defined and tools is not none and tools|length > 0 %} {%- set has_tools = true %} {%- set tools_description = "[AVAILABLE_TOOLS]" + (tools | tojson) + "[/AVAILABLE_TOOLS]" %} {{- tools_description }} {%- endif %} {%- for message in loop_messages %} {%- if message['role'] == 'user' %} {%- if message['content'] is string %} {{- '[INST]' + message['content'] + '[/INST]' }} {%- else %} {{- '[INST]' }} {%- for block in message['content'] %} {%- if block['type'] == 'text' %} {#- Original did not have content which is weird. Added by Un-sloth. #} {%- if block['text'] is defined %} {{- block['text'] }} {%- else %} {{- block['content'] }} {%- endif %} {%- elif block['type'] in ['image', 'image_url'] %} {{- '[IMG]' }} {%- else %} {{- raise_exception('Only text and image blocks are supported in message content!') }} {%- endif %} {%- endfor %} {{- '[/INST]' }} {%- endif %} {%- elif message['role'] == 'system' %} {%- if message['content'] is string %} {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }} {%- else %} {{- '[SYSTEM_PROMPT]' + message['content'][0]['text'] + '[/SYSTEM_PROMPT]' }} {%- endif %} {%- elif message['role'] == 'assistant' %} {%- if message['content'] is string %} {{- message['content'] }} {%- else %} {{- message['content'][0]['text'] }} {%- endif %} {#- If User,Assistant,Tool,Tool we also need to append tools_description. Edits made by Unsloth #} {%- if message['tool_calls'] is defined and message['tool_calls'] is not none %} {%- for tool in message['tool_calls'] %} {%- set arguments = tool['function']['arguments'] %} {%- if arguments is not string %} {%- set arguments = arguments|tojson %} {%- endif %} {#- Must list tool calls AFTER assistant. Edits made by Un-sloth #} {{- "[TOOL_CALLS]" + tool['function']['name'] + "[ARGS]" + arguments }} {%- endfor %} {%- endif %} {{- eos_token }} {%- elif message["role"] == "tool_results" or message["role"] == "tool" %} {%- if message.content is defined and message.content.content is defined %} {%- set content = message.content.content %} {%- else %} {%- set content = message.content %} {%- endif %} {{- "[TOOL_RESULTS]" + content|string + "[/TOOL_RESULTS]" }} {%- else %} {{- raise_exception('Only user, systemm assistant and tool roles are supported in the custom template made by Unsloth!') }} {%- endif %} {%- endfor %} {#- Copyright 2025-present the Unsloth team. All rights reserved. 
#} {#- Licensed under the Apache License, Version 2.0 (the "License") #}, example_format: '[SYSTEM_PROMPT]You are a helpful assistant[/SYSTEM_PROMPT][INST]Hello[/INST]Hi there</s>[INST]How are you?[/INST]'
main: server is listening on http://0.0.0.0:11454 - starting the main loop
srv  update_slots: all slots are idle
srv  params_from_: Chat format: Mistral Nemo
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 16384, n_keep = 0, n_prompt_tokens = 6551
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.312624
slot update_slots: id 0 | task 0 | kv cache rm [2048, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 4096, n_tokens = 2048, progress = 0.625248
slot update_slots: id 0 | task 0 | kv cache rm [4096, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 6144, n_tokens = 2048, progress = 0.937872
slot update_slots: id 0 | task 0 | kv cache rm [6144, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 6551, n_tokens = 407, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 6551, n_tokens = 407
slot release: id 0 | task 0 | stop processing: n_past = 6584, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 53959.64 ms / 6551 tokens ( 8.24 ms per token, 121.41 tokens per second)
       eval time = 3519.75 ms / 34 tokens ( 103.52 ms per token, 9.66 tokens per second)
      total time = 57479.39 ms / 6585 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.1.128 200
srv  params_from_: Chat format: Mistral Nemo
slot launch_slot_: id 0 | task 38 | processing task
slot update_slots: id 0 | task 38 | new prompt, n_ctx_slot = 16384, n_keep = 0, n_prompt_tokens = 6655
slot update_slots: id 0 | task 38 | kv cache rm [6551, end)
slot update_slots: id 0 | task 38 | prompt processing progress, n_past = 6655, n_tokens = 104, progress = 0.015627
slot update_slots: id 0 | task 38 | prompt done, n_past = 6655, n_tokens = 104
slot release: id 0 | task 38 | stop processing: n_past = 6701, truncated = 0
slot print_timing: id 0 | task 38 |
prompt eval time = 899.70 ms / 104 tokens ( 8.65 ms per token, 115.59 tokens per second)
       eval time = 4870.71 ms / 47 tokens ( 103.63 ms per token, 9.65 tokens per second)
      total time = 5770.42 ms / 151 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.1.128 200
srv  params_from_: Chat format: Mistral Nemo
slot launch_slot_: id 0 | task 86 | processing task
slot update_slots: id 0 | task 86 | new prompt, n_ctx_slot = 16384, n_keep = 0, n_prompt_tokens = 7823
slot update_slots: id 0 | task 86 | kv cache rm [6655, end)
slot update_slots: id 0 | task 86 | prompt processing progress, n_past = 7823, n_tokens = 1168, progress = 0.149303
slot update_slots: id 0 | task 86 | prompt done, n_past = 7823, n_tokens = 1168
slot release: id 0 | task 86 | stop processing: n_past = 7869, truncated = 0
slot print_timing: id 0 | task 86 |
prompt eval time = 10357.48 ms / 1168 tokens ( 8.87 ms per token, 112.77 tokens per second)
       eval time = 5054.70 ms / 47 tokens ( 107.55 ms per token, 9.30 tokens per second)
      total time = 15412.18 ms / 1215 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.1.128 200
srv  params_from_: Chat format: Mistral Nemo
slot launch_slot_: id 0 | task 134 | processing task
slot update_slots: id 0 | task 134 | new prompt, n_ctx_slot = 16384, n_keep = 0, n_prompt_tokens = 7968
slot update_slots: id 0 | task 134 | kv cache rm [7823, end)
slot update_slots: id 0 | task 134 | prompt processing progress, n_past = 7968, n_tokens = 145, progress = 0.018198
slot update_slots: id 0 | task 134 | prompt done, n_past = 7968, n_tokens = 145
slot release: id 0 | task 134 | stop processing: n_past = 8043, truncated = 0
slot print_timing: id 0 | task 134 |
prompt eval time = 1245.72 ms / 145 tokens ( 8.59 ms per token, 116.40 tokens per second)
       eval time = 8228.86 ms / 76 tokens ( 108.27 ms per token, 9.24 tokens per second)
      total time = 9474.58 ms / 221 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.1.128 200
```