Installing llama.cpp on Android Termux and Launching the WebUI
llama.cpp does not publish official aarch64 binaries, so normally you would have to compile it yourself. Fortunately, Termux already ships a prebuilt package.
1. Install the llama-cpp package
Run the following commands in Termux to install it:
~ $ apt update
Get:1 https://mirrors.tuna.tsinghua.edu.cn/termux/apt/termux-main stable InRelease [14.0 kB]
...
Fetched 556 kB in 1s (425 kB/s)
Reading package lists... Done
~ $ apt install llama-cpp
The following additional packages will be installed:
libandroid-spawn
Suggested packages: llama-cpp-backend-vulkan llama-cpp-backend-opencl
The following NEW packages will be installed:
libandroid-spawn llama-cpp
...
Setting up llama-cpp (0.0.0-b8184-0) ...
If the package cannot be found, run apt update first to refresh the package index. For simplicity, skip llama-cpp-backend-vulkan for now and run llama.cpp on the CPU.
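Once installation finishes, you can sanity-check it; both binaries accept the standard llama.cpp --version flag (output omitted here, since the build string depends on whichever version the Termux package currently ships):
~ $ llama-cli --version
~ $ ls $PREFIX/bin/llama-*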
2. Download the model file
~ $ mkdir model
~ $ cd model
~/model $ curl -LO https://hf-mirror.com/unsloth/Qwen3.5-0.8B-GGUF/resolve/main/Qwen3.5-0.8B-UD-Q4_K_XL.gguf
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 532M 100 532M 0 0 4147k 0 0:02:11 0:02:11 --:--:-- 5141k
This model is Q4-quantized, taking roughly half the disk space of the original while keeping comparable capability.
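Downloads over mobile networks can stall; plain curl can resume an interrupted transfer, and a quick size check confirms the file arrived intact (these commands are generic curl/ls usage, not specific to this model):
~/model $ curl -C - -LO https://hf-mirror.com/unsloth/Qwen3.5-0.8B-GGUF/resolve/main/Qwen3.5-0.8B-UD-Q4_K_XL.gguf
~/model $ ls -lh Qwen3.5-0.8B-UD-Q4_K_XL.gguf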
3. Interactive test on the command line
Load the model with the llama-cli tool and start a conversation:
~/model $ llama-cli -m Qwen3.5-0.8B-UD-Q4_K_XL.gguf --ctx-size 16384 -cnv
load_backend: loaded CPU backend from /data/data/com.termux/files/usr/bin/../lib/libggml-cpu.so
Loading model...
build : b0-unknown
model : Qwen3.5-0.8B-UD-Q4_K_XL.gguf
modalities : text
available commands:
/exit or Ctrl+C stop or exit
/regen regenerate the last response
/clear clear the chat history
/read add a text file
Type a question and wait for the reply. Because the model is small, its intelligence is limited, but it is enough to verify that everything works.
To exit the session:
> /exit
Exiting...
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free   self   model   context   compute   unaccounted |
llama_memory_breakdown_print: | - Host                 |               1222 =    522 +     211 +      489              |
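Besides interactive chat, llama-cli can answer a single prompt and exit, which is handy for scripting. A minimal sketch, assuming a build that defaults to chat mode (-no-cnv forces one-shot completion; the prompt text and the -n token limit are arbitrary examples):
~/model $ llama-cli -m Qwen3.5-0.8B-UD-Q4_K_XL.gguf -p "Briefly explain what GGUF is." -n 128 -no-cnv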
4. Launch the built-in WebUI
Start the server with llama-server (--jinja enables the model's built-in chat template; -c 0 uses the model's full trained context length):
~/model $ llama-server -m ./Qwen3.5-0.8B-UD-Q4_K_XL.gguf --jinja -c 0 --host 127.0.0.1 --port 8033
load_backend: loaded CPU backend from /data/data/com.termux/files/usr/bin/../lib/libggml-cpu.so
main: n_parallel is set to auto, using n_parallel = 4 and kv_unified = true
build: 0 (unknown) with Clang 21.0.0 Android aarch64
system info: n_threads = 8, n_threads_batch = 8, total_threads = 8
...
srv load_model: loading model
...
main: server is listening on http://127.0.0.1:8033
main: starting the main loop...
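The WebUI is now available by opening http://127.0.0.1:8033 in the phone's browser. The same port also exposes llama-server's OpenAI-compatible API, so you can test it from a second Termux session; a minimal request sketch (the message content is just an example):
~ $ curl http://127.0.0.1:8033/v1/chat/completions -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello, who are you?"}]}'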

