A Detailed Guide to Installing Openclaw on Ubuntu 22.04
Preparation
Openclaw Chinese official site: https://clawd.org.cn/
One-Command Install
curl -fsSL https://clawd.org.cn/install.sh | sudo bash
Enter Y to confirm the installation.
You will need a DeepSeek API Key. Visit https://platform.deepseek.com/sign_in to register and obtain one, then enter the key in the terminal.
Setting Up Channels: Feishu
Create a Feishu bot at https://open.feishu.cn
- Go to the developer console, register an account, and create an AI bot.
- Obtain the App ID and App Secret.
- Enter the App ID and Secret in the install terminal.
- You can also configure this later with a standalone command:
openclaw-cn configure --section channels
If anything goes wrong, run openclaw-cn onboard --install-daemon to reconfigure.
If the web UI will not open because the Gateway is not installed, install the required tools first:
sudo apt install net-tools
Making Your AI Workers More Useful
Adding a Free Model
Configure the official free GLM-4.7-Flash API: https://bigmodel.cn/
- Register a developer account.
- Create a new API Key in the console and copy it.
- Edit the openclaw.json file and add the model configuration:
{
  "models": {
    "providers": {
      "glm": {
        "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
        "apiKey": "your apiKey",
        "api": "openai-completions",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM-4.7 Flash",
            "contextWindow": 128000,
            "maxTokens": 4096,
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "glm/glm-4.7-flash" },
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 }
    }
  }
}
Restart the service: openclaw-cn gateway restart
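If you would rather patch the config from a script than edit it by hand, the merge can be sketched as below. This is a minimal helper that only manipulates a dict; how you read and write openclaw.json (its location varies by install) is left to you.

```python
# Sketch: merge the GLM provider block above into an openclaw.json-style dict.
def add_glm_provider(config: dict, api_key: str) -> dict:
    """Add the GLM provider and make glm-4.7-flash the default agent model."""
    providers = config.setdefault("models", {}).setdefault("providers", {})
    providers["glm"] = {
        "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
        "apiKey": api_key,
        "api": "openai-completions",
        "models": [{
            "id": "glm-4.7-flash",
            "name": "GLM-4.7 Flash",
            "contextWindow": 128000,
            "maxTokens": 4096,
            "reasoning": False,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
        }],
    }
    defaults = config.setdefault("agents", {}).setdefault("defaults", {})
    defaults["model"] = {"primary": "glm/glm-4.7-flash"}  # other keys are preserved
    return config
```

Using `setdefault` means any settings already present in the file (e.g. channel config) are left untouched.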
Configuring DingTalk
Visit https://open-dev.dingtalk.com to obtain the Client ID (AppKey) and Client Secret (AppSecret). In Permission Management, add:
- Card.Instance.Write
- Card.Streaming.Write
- im:message
Install the DingTalk plugin (none is bundled by default):
openclaw-cn plugins install https://github.com/soimy/clawdbot-channel-dingtalk.git
Adding DeepSeek on Top of GLM-4.7-Flash
When the free GLM-4.7 Flash token quota runs out, Openclaw can switch to DeepSeek automatically. Add a DeepSeek API Key configuration to openclaw.json, then restart the service: openclaw-cn gateway restart
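A config sketch modeled on the GLM block above: the deepseek provider entry follows the same provider schema, while the "fallbacks" field is an assumption about how Openclaw expresses the automatic switch — check your openclaw-cn version's docs for the exact field name. The context window and model id are illustrative.

```json
{
  "models": {
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com",
        "apiKey": "your DeepSeek apiKey",
        "api": "openai-completions",
        "models": [
          { "id": "deepseek-chat", "name": "DeepSeek Chat", "contextWindow": 64000, "maxTokens": 4096 }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "glm/glm-4.7-flash", "fallbacks": ["deepseek/deepseek-chat"] }
    }
  }
}
```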
Adding MiniMax and Doubao Models
- MiniMax M2.5: visit https://api.minimax.chat/ to obtain an API Key.
- Seedance 2.0: visit https://console.volcengine.com/ark to enable the Volcengine Ark service and create an API Key.
- Add the corresponding entries to openclaw.json to connect multiple large models.
Configuring Web Search
Use Exa.ai (https://exa.ai/) for web search.
- Register and obtain an API Key.
- Tell Openclaw to configure it: "I have registered with Exa.ai and obtained the apikey xxx; please configure exa.ai to replace Brave Search for web search."
- Restart the service:
openclaw-cn gateway restart
.env File
EXA_API_KEY=YOUR_API_KEY
🔌 Exa MCP Server for OpenAI Codex
Provides OpenAI Codex with real-time web search, code context, and company research.
Run:
codex mcp add exa --url https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY
Enable specific tools:
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,get_code_context_exa,people_search_exa
Enable all tools:
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,web_search_advanced_exa,get_code_context_exa,crawling_exa,company_research_exa,people_search_exa,deep_researcher_start,deep_researcher_check
Quick Start
cURL:
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "latest developments in AI safety research", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
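The same request in Python, using only the standard library; the payload mirrors the cURL call above (the actual send is left commented out):

```python
import json
import urllib.request

def build_search_request(api_key: str, query: str, num_results: int = 10) -> urllib.request.Request:
    """Construct the POST request shown in the cURL example."""
    payload = {
        "query": query,
        "type": "auto",
        "num_results": num_results,
        "contents": {"text": {"max_characters": 20000}},
    }
    return urllib.request.Request(
        "https://api.exa.ai/search",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
# with urllib.request.urlopen(build_search_request("YOUR_API_KEY", "latest developments in AI safety research")) as resp:
#     print(json.load(resp))
```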
Function Calling / Tool Use
Function calling lets an AI agent decide dynamically, based on conversation context, when to search the web.
OpenAI Function Calling:
import json
from openai import OpenAI
from exa_py import Exa

openai = OpenAI()
exa = Exa()

tools = [{"type": "function", "function": {"name": "exa_search", "description": "Search the web for current information.", "parameters": {"type": "object", "properties": {"query": {"type": "string", "description": "Search query"}}, "required": ["query"]}}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10, text={"max_characters": 20000})
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "What's the latest in AI safety?"}]
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    search_results = exa_search(json.loads(tool_call.function.arguments)["query"])
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": search_results})
    final = openai.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)
Anthropic Tool Use:
import anthropic
from exa_py import Exa

client = anthropic.Anthropic()
exa = Exa()

tools = [{"name": "exa_search", "description": "Search the web for current information.", "input_schema": {"type": "object", "properties": {"query": {"type": "string", "description": "Search query"}}, "required": ["query"]}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10, text={"max_characters": 20000})
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "Latest quantum computing developments?"}]
response = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096, tools=tools, messages=messages)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    tool_result = exa_search(tool_use.input["query"])
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [{"type": "tool_result", "tool_use_id": tool_use.id, "content": tool_result}]})
    final = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096, tools=tools, messages=messages)
    print(final.content[0].text)
Search Type Reference
| Type | Best For | Speed | Depth |
|---|---|---|---|
| `fast` | Real-time apps, autocomplete, quick lookups | Fastest | Basic |
| `auto` | Most queries - balanced relevance & speed | Medium | Smart |
| `deep` | Research, enrichment, thorough results | Slow | Deep |
| `deep-reasoning` | Complex research, multi-step reasoning | Slowest | Deepest |
Tip: `type="auto"` works well for most queries.
Content Configuration
Choose ONE content type per request:
| Type | Config | Best For |
|---|---|---|
| Text | "text": {"max_characters": 20000} | Full content extraction, RAG |
| Highlights | "highlights": {"max_characters": 4000} | Snippets, summaries, lower cost |
Warning: Using `text: true` can significantly increase token count.
Domain Filtering (Optional)
Example:
{"includeDomains":["arxiv.org","github.com"],"excludeDomains":["pinterest.com"]}
Web Search Tool
{"query":"latest developments in AI safety research","num_results":10,"contents":{"text":{"max_characters":20000}}}
Category Examples
Use category filters to search dedicated indexes.
- People Search (`category: "people"`): Find people by role or expertise.
- Company Search (`category: "company"`): Find companies by industry.
- News Search (`category: "news"`): News articles.
- Research Papers (`category: "research paper"`): Academic papers.
- Tweet Search (`category: "tweet"`): Twitter/X posts.
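A category is just another field in the search body sent to the same `/search` endpoint; a sketch of building such a body:

```python
def category_search_body(query: str, category: str, num_results: int = 10) -> dict:
    """Build a /search body restricted to one of Exa's category indexes."""
    # Valid categories per the list above: "people", "company", "news",
    # "research paper", "tweet".
    return {
        "query": query,
        "type": "auto",
        "category": category,
        "num_results": num_results,
    }

# e.g. category_search_body("founders of AI safety startups", "people")
```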
Content Freshness (maxAgeHours)
Sets maximum acceptable age for cached content.
| Value | Behavior | Best For |
|---|---|---|
| 24 | Use cache if less than 24 hours old | Daily-fresh content |
| 1 | Use cache if less than 1 hour old | Near real-time data |
| 0 | Always livecrawl | Real-time data |
| -1 | Never livecrawl | Maximum speed |
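The table above maps cleanly onto a small helper. The top-level placement of `maxAgeHours` in the request body is an assumption here; check the API reference for where your version expects it.

```python
# Map a desired freshness policy to the maxAgeHours values from the table above.
FRESHNESS_HOURS = {
    "daily": 24,        # use cached content if it is under 24 hours old
    "hourly": 1,        # use cached content if it is under 1 hour old
    "live": 0,          # always livecrawl
    "cached_only": -1,  # never livecrawl (maximum speed)
}

def fresh_search_body(query: str, freshness: str = "daily") -> dict:
    """Build a /search body with maxAgeHours (field placement is an assumption)."""
    return {"query": query, "type": "auto", "maxAgeHours": FRESHNESS_HOURS[freshness]}
```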
Other Endpoints
| Endpoint | Description |
|---|---|
| `/contents` | Get contents for known URLs |
| `/answer` | Q&A with citations from web search |
Troubleshooting
- Results not relevant? Try `type: "auto"` or `type: "deep"`. Refine the query.
- Need structured data? Use `type: "deep"` with `outputSchema`.
- Results too slow? Use `type: "fast"` or reduce `num_results`.
- No results? Remove filters, simplify the query.
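The speed and relevance tips above can be combined into an escalation loop: start with a fast search and only step up the search type when nothing comes back. A sketch, where `search_fn` stands in for any client call (e.g. a wrapper around `exa.search_and_contents`):

```python
def search_with_escalation(search_fn, query: str, types=("fast", "auto", "deep")):
    """Try progressively deeper search types until one returns results."""
    for search_type in types:
        results = search_fn(query, type=search_type)
        if results:  # stop at the first non-empty result set
            return search_type, results
    return None, []
```

This keeps the common case cheap while still reaching `deep` for hard queries.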
Resources
- Docs: https://exa.ai/docs
- Dashboard: https://dashboard.exa.ai
- API Status: https://status.exa.ai


