Fine-tuning a Qwen2.5 Model with LLaMA-Factory and Converting It to GGUF for Deployment

In the open-source LLM space, the Qwen series is widely popular thanks to its strong Chinese-language capability and permissive license. However, using the base model directly often falls short of specific business scenarios, so fine-tuning is needed to inject domain knowledge. How do you then deploy the fine-tuned model efficiently? GGUF is the format widely supported by inference backends such as llama.cpp, with advantages like cross-platform portability and memory mapping. This post documents the full process of fine-tuning Qwen2.5-7B-Instruct with LLaMA-Factory and converting the fine-tuned model to GGUF with llama.cpp, and shares a classic error hit during conversion together with its solution.

1. Environment Setup
We work on a Linux server with Conda installed for environment isolation. The following components are needed:
Python 3.10
LLaMA-Factory (for fine-tuning)
llama.cpp (for format conversion)
Dependencies such as transformers, peft, and accelerate

1.1 Create a Conda environment
conda create -n llama_factory python=3.10 -y
conda activate llama_factory
1.2 Install LLaMA-Factory
LLaMA-Factory is an efficient fine-tuning framework supporting many models and algorithms. We install it from source:
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
If you hit dependency conflicts during installation, you can adjust the transformers version as needed, but keeping it recent is recommended.

1.3 Install llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
Note: the conversion script convert_hf_to_gguf.py depends on transformers, so make sure its version is compatible.

2. Fine-tune Qwen2.5-7B-Instruct with LLaMA-Factory
We use Qwen2.5-7B-Instruct as the base model and run instruction fine-tuning on a custom dataset. Assume the data has been prepared in JSON format, with each record containing an instruction field and an output field.

2.1 Prepare the data
Place the dataset under the LLaMA-Factory/data directory and register it in the dataset configuration file dataset_info.json, for example:

{ "my_dataset": { "file_name": "my_dataset.json", "columns": { "prompt": "instruction", "response": "output" } } } 

2.2 Configure the fine-tuning run
LLaMA-Factory can be configured either on the command line or through a YAML file. Here we run LoRA fine-tuning from the command line (a YAML sketch of the same run follows the command below):

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --dataset my_dataset \
    --dataset_dir ./data \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ./output/qwen2.5-lora \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 500 \
    --learning_rate 1e-4 \
    --num_train_epochs 3 \
    --fp16

After training completes, the fine-tuned LoRA weights are saved in the ./output/qwen2.5-lora directory.
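For readers who prefer the YAML route mentioned above, roughly the same run can be described as a config file. The sketch below simply mirrors the command-line arguments; the exact key names and launch entry point vary between LLaMA-Factory versions (newer releases are launched with llamafactory-cli train <config>.yaml, while older ones ship train_bash.py as used above), so check the example configs in your checkout before relying on it:

# qwen2.5_lora_sft.yaml -- illustrative sketch; verify key names against your LLaMA-Factory version
stage: sft
do_train: true
model_name_or_path: Qwen/Qwen2.5-7B-Instruct
dataset: my_dataset
dataset_dir: ./data
finetuning_type: lora
lora_target: q_proj,v_proj
output_dir: ./output/qwen2.5-lora
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
lr_scheduler_type: cosine
logging_steps: 10
save_steps: 500
learning_rate: 1.0e-4
num_train_epochs: 3
fp16: true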

2.3 Merge the LoRA weights (if you need a full exported model)
If you want a complete model in HuggingFace format (rather than just the LoRA adapter), you can merge it with the export_model.py script:

python src/export_model.py \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --adapter_name_or_path ./output/qwen2.5-lora \
    --template default \
    --finetuning_type lora \
    --export_dir ./output/qwen2.5-merged

The merged full model is saved in ./output/qwen2.5-merged, containing all necessary configuration files, the tokenizer, and the weight files.
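Before moving on to conversion, it can save time to confirm that the merged directory loads and generates with plain transformers. This optional sanity check is not part of the original workflow; the path and prompt are placeholders, and device_map="auto" assumes accelerate is installed (it is listed in the dependencies above):

# sanity_check_merged.py -- quick check that the merged model loads and generates
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./output/qwen2.5-merged"  # merged model from section 2.3

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a short reply
messages = [{"role": "user", "content": "你好,请介绍一下你自己。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))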

3. Convert the Fine-tuned Model to GGUF
3.1 Prepare the conversion environment
For the conversion we use a separate conda environment so its dependencies do not conflict with LLaMA-Factory's. Create a new environment and install the necessary tools:

conda create -n llama.cpp python=3.10 -y
conda activate llama.cpp
pip install torch transformers sentencepiece protobuf

3.2 Run the llama.cpp conversion script
Enter the llama.cpp directory and run the conversion command (assuming the merged model is located at /mnt/workspace/output/qwen2.5-merged):

cd /path/to/llama.cpp
python convert_hf_to_gguf.py /mnt/workspace/output/qwen2.5-merged \
    --outtype f16 \
    --verbose \
    --outfile /mnt/workspace/qwen2.5-7B-instruct.gguf

3.3 A classic error and how to fix it
Running the command produced the following error:

python llama.cpp/convert_hf_to_gguf.py /mnt/workspace/.cache/modelscope/models/Qwen/Qwen2.5-7B-Instruct-lora --outtype f16 --verbose --outfile /mnt/workspace/Meta-Llama-3-8B-Instruct-gguf.gguf
INFO:hf-to-gguf:Loading model: Qwen2.5-7B-Instruct-lora
INFO:hf-to-gguf:Model architecture: Qwen2ForCausalLM
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: indexing model part 'model-00001-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00002-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00003-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00004-of-00004.safetensors'
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
[... per-tensor export lines for blk.0 through blk.27 omitted ...]
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 32768
INFO:hf-to-gguf:gguf: embedding length = 3584
INFO:hf-to-gguf:gguf: feed forward length = 18944
INFO:hf-to-gguf:gguf: head count = 28
INFO:hf-to-gguf:gguf: key-value head count = 4
WARNING:hf-to-gguf:Unknown RoPE type: default
INFO:hf-to-gguf:gguf: rope scaling type = NONE
INFO:hf-to-gguf:gguf: rope theta = 1000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model quantization version
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 3534, in set_vocab
    self._set_vocab_sentencepiece()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1358, in _set_vocab_sentencepiece
    tokens, scores, toktypes = self._create_vocab_sentencepiece()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1375, in _create_vocab_sentencepiece
    raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: /mnt/workspace/.cache/modelscope/models/Qwen/Qwen2.5-7B-Instruct-lora/tokenizer.model

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 11934, in <module>
    main()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 11928, in main
    model_instance.write()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 689, in write
    self.prepare_metadata(vocab_only=False)
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 830, in prepare_metadata
    self.set_vocab()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 3536, in set_vocab
    self._set_vocab_gpt2()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1294, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 978, in get_vocab_base
    tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 814, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/models/qwen2/tokenization_qwen2_fast.py", line 129, in __init__
    super().__init__(
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 111, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum ModelWrapper at line 757443 column 3

Root-cause analysis
The failure happens when the conversion script loads the tokenizer.json file: the JSON structure does not match what the parser expects. This is usually caused by one of two things:

A corrupted tokenizer.json: the download may have been incomplete, or the file was accidentally modified during fine-tuning.

An incompatible transformers version: some newer tokenizer.json formats can only be parsed correctly by a sufficiently recent transformers release (a quick way to tell the two causes apart is sketched below).
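As a diagnostic (not part of the original run), you can try loading the tokenizer directly with the transformers version installed in the conversion environment. If this already fails with the same "untagged enum ModelWrapper" error, the problem lies in the environment rather than in the conversion script; the model path below is just the merged directory from earlier:

# tokenizer_check.py -- diagnostic sketch for telling the two causes apart
import transformers
from transformers import AutoTokenizer

model_dir = "/mnt/workspace/output/qwen2.5-merged"

print("transformers version:", transformers.__version__)
try:
    tok = AutoTokenizer.from_pretrained(model_dir)
    print("tokenizer loaded OK, vocab size:", tok.vocab_size)
except Exception as e:
    # The same "untagged enum ModelWrapper" error here points to a version/format mismatch,
    # while a JSON parse error on a truncated file points to a corrupted tokenizer.json.
    print("tokenizer failed to load:", e)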

Solution
After some investigation, it turned out to be a transformers version problem. The environment had an older transformers (4.36.0), while the Qwen2.5 tokenizer requires a newer release. Force-installing transformers==4.45.0 solved it:

pip install --force-reinstall transformers==4.45.0
Re-running the conversion command then exported the GGUF file successfully.

Note: if the tokenizer.json in the model directory really is corrupted, re-download it from the official HuggingFace repository and overwrite the local copy.

3.4 Verify the conversion result
After the conversion finishes, check the output file:

ls -lh /mnt/workspace/qwen2.5-7B-instruct.gguf
You can verify that the model loads with the simple CLI example shipped with llama.cpp (the binary is called main in older builds; newer builds rename it to llama-cli):

./main -m /mnt/workspace/qwen2.5-7B-instruct.gguf -p "你好,请介绍一下你自己。" -n 100
If it generates normal output, the conversion succeeded.
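If you want to serve the model rather than just smoke-test it, llama.cpp also ships an HTTP server example. The commands below are a sketch under the assumption of a recent build (the binary is named server in older builds and llama-server in newer ones), and the request body follows llama.cpp's /completion API:

./llama-server -m /mnt/workspace/qwen2.5-7B-instruct.gguf --port 8080

# In another terminal, request a completion
curl http://localhost:8080/completion \
    -H "Content-Type: application/json" \
    -d '{"prompt": "你好,请介绍一下你自己。", "n_predict": 100}'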

4. Summary
In this walkthrough we completed the following:
LoRA-fine-tuned Qwen2.5-7B-Instruct with LLaMA-Factory and merged the adapter into a full model.
Converted the fine-tuned model to GGUF with llama.cpp's conversion tool for efficient deployment.
Resolved the tokenizer.json parsing error hit during conversion; the key is making sure the transformers version is compatible with the model.

Key takeaways:
Version compatibility: the conversion script is sensitive to the transformers version; use a recent stable release (4.45.0 in our case).
File integrity: after fine-tuning, always check that tokenizer.json is intact, and re-download it from the official source if necessary.
Output naming: name the output file in the conversion command after the actual model to avoid confusion (in the log above, the output was mistakenly given a Llama-related name).
Models in GGUF format run easily on inference backends such as llama.cpp, Ollama, and LM Studio, which makes local deployment much more convenient.
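As one example of that, importing the converted file into Ollama only requires a minimal Modelfile. The snippet below is an illustrative sketch (the file name Modelfile and the model name qwen2.5-ft are arbitrary choices for this example), not a verified configuration:

# Modelfile -- minimal sketch for importing the converted GGUF into Ollama
FROM /mnt/workspace/qwen2.5-7B-instruct.gguf
# Depending on the Ollama version you may also want a TEMPLATE block matching
# Qwen's ChatML-style chat format; see Ollama's Modelfile documentation.

ollama create qwen2.5-ft -f Modelfile
ollama run qwen2.5-ft "你好,请介绍一下你自己。"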
