Hire Your Own AI Employee: A Detailed Guide to Installing OpenClaw on Ubuntu 22.04 (Beginner-Friendly, Continuously Updated)
Installing OpenClaw on Ubuntu 22.04
- Preparation
- Making the AI employee more useful
- Adding free models
- Configuring DingTalk
- Adding DeepSeek on top of GLM-4.7-Flash
- Adding the MiniMax and Doubao models
- Configuring web search
- 🔌 Exa MCP Server for OpenAI Codex
- Quick Start
- Function Calling / Tool Use
- Search Type Reference
- Content Configuration
- Domain Filtering (Optional)
- Web Search Tool
- Category Examples
- Content Freshness (maxAgeHours)
- Other Endpoints
- Troubleshooting
- Resources
Although I have finished with GNU Radio and the project itself is done, I still need to complete the port, modify the RF front end, increase the bandwidth, and speed up frequency switching. The real reason I am installing OpenClaw is to have the AI help me with the GNU Radio port, FPGA development, the RF work, and Linux programming on the ARM side.
In one sentence: to verify whether AI can actually make me more efficient.
Preparation
As always, the first step for anything is to find the official site. The Chinese OpenClaw site is:
https://clawd.org.cn/
We will use the Chinese site directly, with no VPN and nothing fancy.

One-line install:

```bash
curl -fsSL https://clawd.org.cn/install.sh | sudo bash
```


When prompted, enter Y.

This step requires a DeepSeek API key.
Go to https://platform.deepseek.com/sign_in,
register, and create an API key.

Back in the install terminal, enter the key.

Setting up channels: configuring Feishu
You need to create a Feishu bot.
Go to https://open.feishu.cn and open the developer console; you will need to register an account and create an AI bot app.

Once the app is created, you can see its App ID and App Secret.

Then enter the App ID and App Secret in the install terminal.
You can also configure channels later with a standalone command:
openclaw-cn configure --section channels






If anything goes wrong, run openclaw-cn onboard --install-daemon and go through the configuration again.
In my case the gateway had not been installed, so the web UI would not open.
Running sudo apt install net-tools installed the required tools, after which the page opened.
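To check whether the gateway is actually listening before opening the browser, a quick TCP probe works. A minimal sketch; the port 8080 below is a placeholder, so substitute whatever port your OpenClaw gateway reports at startup:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder port: replace 8080 with the port your gateway prints at startup.
print(port_is_open("127.0.0.1", 8080))
```

If this prints False, the gateway process is not up yet, regardless of what the browser says.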

All set, enjoy!
Making the AI employee more useful
Adding free models
Configuring the official free GLM-4.7-Flash API
Register as a developer at https://bigmodel.cn/ and get an API key.

In the console, create a new API key and copy it.
Then add the following to openclaw.json:
```json
"models": {
  "providers": {
    "glm": {
      "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
      "apiKey": "your-api-key",
      "api": "openai-completions",
      "models": [
        {
          "id": "glm-4.7-flash",
          "name": "GLM-4.7 Flash",
          "contextWindow": 128000,
          "maxTokens": 4096,
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          }
        }
      ]
    }
  }
},
"agents": {
  "defaults": {
    "model": {
      "primary": "glm/glm-4.7-flash"
    },
    "maxConcurrent": 4,
    "subagents": {
      "maxConcurrent": 8
    }
  }
},
```
Then restart OpenClaw by running openclaw-cn gateway restart.
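If the restart fails, the most common cause is a malformed openclaw.json, so a quick syntax check saves time. A sketch, assuming the config sits at ~/.openclaw/openclaw.json (the path may differ on your install), and checking only the keys used in the GLM example above:

```python
import json
from pathlib import Path

def validate_config(path):
    """Return a list of problems found in an OpenClaw-style JSON config."""
    problems = []
    try:
        cfg = json.loads(Path(path).read_text(encoding="utf-8"))
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    providers = cfg.get("models", {}).get("providers", {})
    if not providers:
        problems.append("no models.providers configured")
    for name, provider in providers.items():
        # Keys taken from the GLM example in this guide.
        for key in ("baseUrl", "apiKey", "models"):
            if key not in provider:
                problems.append(f"provider {name!r} is missing {key!r}")
    return problems

# Assumed default location; adjust to wherever your install keeps the file.
# print(validate_config(Path.home() / ".openclaw" / "openclaw.json"))
```

An empty list means the file at least parses and each provider has the expected fields.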

Configuring DingTalk
Copy the API key and secret from the DingTalk developer platform at https://open-dev.dingtalk.com:
Client ID (i.e. AppKey)
Client Secret (i.e. AppSecret)
In "Permission Management", search for and add these permissions:
Card.Instance.Write (card instance write permission)
Card.Streaming.Write (card streaming write permission)
im:message (message-related permissions, as needed)
Then install the DingTalk plugin.
OpenClaw has no built-in DingTalk plugin, so install the community version manually:
openclaw-cn plugins install https://github.com/soimy/clawdbot-channel-dingtalk.git

Adding DeepSeek on top of GLM-4.7-Flash
I told the AI: "Please add DeepSeek for me; I already have a DeepSeek API key. When the free GLM-4.7 Flash token quota runs out, switch to DeepSeek automatically."

But then the AI employee got stuck.
It turned out openclaw.json had been misconfigured; after fixing it and adding the API key, everything worked.
Restart with openclaw-cn gateway restart and you are done.
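For reference, the DeepSeek entry ends up alongside the GLM provider shown earlier. This is only a sketch modeled on that GLM block: the field names follow this document's earlier example, and the base URL and model ID (https://api.deepseek.com, deepseek-chat) are DeepSeek's documented OpenAI-compatible defaults, so double-check them against your account:

```json
"deepseek": {
  "baseUrl": "https://api.deepseek.com",
  "apiKey": "your-deepseek-api-key",
  "api": "openai-completions",
  "models": [
    {
      "id": "deepseek-chat",
      "name": "DeepSeek Chat",
      "contextWindow": 128000,
      "maxTokens": 4096,
      "reasoning": false,
      "input": ["text"]
    }
  ]
}
```

How OpenClaw fails over between providers when a quota runs out depends on its agents configuration; consult the official docs rather than assuming automatic fallback.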


Adding the MiniMax and Doubao models
- MiniMax M2.5 API key: visit the MiniMax platform at https://api.minimax.chat/; register/log in and create an API key, then copy and save it.
- Seedance 2.0 API key: visit the Volcano Ark platform at https://console.volcengine.com/ark; register/log in to a Volcano Engine account, enable the "Volcano Ark" service, then create a key on the "API Key Management" page and copy and save it.
Then configure openclaw.json accordingly,
and you can use several large models at once.
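The additions mirror the GLM provider block from earlier. A sketch only: every base URL and model ID below is a placeholder to be replaced with the OpenAI-compatible endpoint and model names shown in each provider's console:

```json
"minimax": {
  "baseUrl": "https://<minimax-openai-compatible-endpoint>/v1",
  "apiKey": "your-minimax-api-key",
  "api": "openai-completions",
  "models": [{ "id": "<minimax-model-id>", "name": "MiniMax M2.5" }]
},
"doubao": {
  "baseUrl": "https://<volcengine-ark-endpoint>/api/v3",
  "apiKey": "your-ark-api-key",
  "api": "openai-completions",
  "models": [{ "id": "<ark-endpoint-or-model-id>", "name": "Doubao (Seedance 2.0)" }]
}
```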
Configuring web search
https://exa.ai/
Registration is free.

Sign In,
then fill in your information.


Then copy the setup prompt for later use.
Also copy the API key, give it to OpenClaw, and let it do the configuration for you.
Enter:
"I have registered at Exa.ai and got the API key b9fxxxxx. Please configure exa.ai to replace Brave Search for web search."
Then restart with openclaw-cn gateway restart, and you are set.
# Exa API Setup Guide

## Your Configuration

| Setting | Value |
|---------|-------|
| Coding Tool | Codex |
| Framework | Other |
| Use Case | Web search tool |
| Search Type | Auto - Balanced relevance and speed (~1 second) |
| Content | Full text |

**Project Description:** (Not provided)

---

## API Key Setup

### Environment Variable

```bash
export EXA_API_KEY="YOUR_API_KEY"
```

### .env File

```
EXA_API_KEY=YOUR_API_KEY
```

🔌 Exa MCP Server for OpenAI Codex
Give OpenAI Codex real-time web search, code context, and company research with Exa MCP.
Run in terminal:

```bash
codex mcp add exa --url "https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a"
```

Tool enablement (optional): add a tools= query param to the MCP URL.

Enable specific tools:

https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a&tools=web_search_exa,get_code_context_exa,people_search_exa

Enable all tools:

https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a&tools=web_search_exa,web_search_advanced_exa,get_code_context_exa,crawling_exa,company_research_exa,people_search_exa,deep_researcher_start,deep_researcher_check

Your API key: f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a
Manage keys at dashboard.exa.ai/api-keys.
Troubleshooting: if tools don’t appear, restart your MCP client after updating the config.
📖 Full docs: docs.exa.ai/reference/exa-mcp
Quick Start

cURL:

```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "latest developments in AI safety research",
    "type": "auto",
    "num_results": 10,
    "contents": { "text": { "max_characters": 20000 } }
  }'
```

Function Calling / Tool Use
Function calling (also known as tool use) allows your AI agent to dynamically decide when to search the web based on the conversation context. Instead of searching on every request, the LLM intelligently determines when real-time information would improve its response—making your agent more efficient and accurate.
Why use function calling with Exa?
- Your agent can ground responses in current, factual information
- Reduces hallucinations by fetching real sources when needed
- Enables multi-step reasoning where the agent searches, analyzes, and responds
📚 Full documentation: https://docs.exa.ai/reference/openai-tool-calling
OpenAI Function Calling
```python
import json

from exa_py import Exa
from openai import OpenAI

openai = OpenAI()
exa = Exa()

tools = [{
    "type": "function",
    "function": {
        "name": "exa_search",
        "description": "Search the web for current information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "Search query"}},
            "required": ["query"],
        },
    },
}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(
        query, type="auto", num_results=10, text={"max_characters": 20000}
    )
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "What's the latest in AI safety?"}]
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    search_results = exa_search(json.loads(tool_call.function.arguments)["query"])
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": search_results})
    final = openai.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)
```

Anthropic Tool Use
```python
import anthropic
from exa_py import Exa

client = anthropic.Anthropic()
exa = Exa()

tools = [{
    "name": "exa_search",
    "description": "Search the web for current information.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string", "description": "Search query"}},
        "required": ["query"],
    },
}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(
        query, type="auto", num_results=10, text={"max_characters": 20000}
    )
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "Latest quantum computing developments?"}]
response = client.messages.create(
    model="claude-sonnet-4-20250514", max_tokens=4096, tools=tools, messages=messages
)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    tool_result = exa_search(tool_use.input["query"])
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": tool_use.id, "content": tool_result}
    ]})
    final = client.messages.create(
        model="claude-sonnet-4-20250514", max_tokens=4096, tools=tools, messages=messages
    )
    print(final.content[0].text)
```

Search Type Reference
| Type | Best For | Speed | Depth |
|---|---|---|---|
| fast | Real-time apps, autocomplete, quick lookups | Fastest | Basic |
| auto | Most queries - balanced relevance & speed | Medium | Smart |
| deep | Research, enrichment, thorough results | Slow | Deep |
| deep-reasoning | Complex research, multi-step reasoning | Slowest | Deepest |
Tip: `type="auto"` works well for most queries. Use `type="deep"` when you need thorough research results or structured outputs with field-level grounding.
Content Configuration
Choose ONE content type per request (not both):
| Type | Config | Best For |
|---|---|---|
| Text | "text": {"max_characters": 20000} | Full content extraction, RAG |
| Highlights | "highlights": {"max_characters": 4000} | Snippets, summaries, lower cost |
⚠️ Token usage warning: using `text: true` (full page text) can significantly increase token count, leading to slower and more expensive LLM calls. To mitigate:
- Add a `max_characters` limit: `"text": {"max_characters": 10000}`
- Use `highlights` instead if you don't need contiguous text
When to use text vs highlights:
- Text - When you need untruncated, contiguous content (e.g., code snippets, full articles, documentation)
- Highlights - When you need key excerpts and don’t need the full context (e.g., summaries, Q&A, general research)
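The distinction above comes down to one key in the request body. A small helper that builds the contents object one way or the other (the 20000 and 4000 defaults mirror the table above; the helper itself is illustrative, not part of any SDK):

```python
def build_contents(mode, max_characters=None):
    """Build an Exa `contents` object with exactly one content type."""
    if mode == "text":
        return {"text": {"max_characters": max_characters or 20000}}
    if mode == "highlights":
        return {"highlights": {"max_characters": max_characters or 4000}}
    raise ValueError("mode must be 'text' or 'highlights'")

# Example request body for a RAG workflow that needs full article text.
payload = {
    "query": "transformer architecture improvements",
    "type": "auto",
    "num_results": 10,
    "contents": build_contents("text"),
}
```

Keeping the choice behind one function makes it hard to accidentally send both content types in a single request.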
Domain Filtering (Optional)
Usually not needed - Exa’s neural search finds relevant results without domain restrictions.
When to use:
- Targeting specific authoritative sources
- Excluding low-quality domains from results
Example:
```json
{"includeDomains": ["arxiv.org", "github.com"], "excludeDomains": ["pinterest.com"]}
```

Note: `includeDomains` and `excludeDomains` can be used together to include a broad domain while excluding specific subdomains (e.g., `"includeDomains": ["vercel.com"], "excludeDomains": ["community.vercel.com"]`).
Web Search Tool
```json
{"query": "latest developments in AI safety research", "num_results": 10, "contents": {"text": {"max_characters": 20000}}}
```

Tips:
- Use `type: "auto"` for most queries
- Great for building search-powered chatbots or agents
- Combine with contents for RAG workflows
Category Examples
Use category filters to search dedicated indexes. Each category returns only that content type.
Note: Categories can be restrictive. If you’re not getting enough results, try searching without a category first, then add one if needed.
People Search (category: "people")
Find people by role, expertise, or what they work on
```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "software engineer distributed systems", "category": "people", "type": "auto", "num_results": 10 }'
```

Tips:
- Use the SINGULAR form
- Describe what they work on
- No date/text filters supported
Company Search (category: "company")
Find companies by industry, criteria, or attributes
```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "AI startup healthcare", "category": "company", "type": "auto", "num_results": 10 }'
```

Tips:
- Use the SINGULAR form
- Simple entity queries
- Returns company objects, not articles
News Search (category: "news")
News articles
```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "OpenAI announcements", "category": "news", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
```

Tips:
- Use `livecrawl: "preferred"` for breaking news
- Avoid date filters unless required
Research Papers (category: "research paper")
Academic papers
```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "transformer architecture improvements", "category": "research paper", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
```

Tips:
- Use `type: "auto"` for most queries
- Includes arxiv.org, paperswithcode.com, and other academic sources
Tweet Search (category: "tweet")
Twitter/X posts
```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "query": "AI safety discussion", "category": "tweet", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
```

Tips:
- Good for real-time discussions
- Captures public sentiment
Content Freshness (maxAgeHours)
maxAgeHours sets the maximum acceptable age (in hours) for cached content. If the cached version is older than this threshold, Exa will livecrawl the page to get fresh content.
| Value | Behavior | Best For |
|---|---|---|
| 24 | Use cache if less than 24 hours old, otherwise livecrawl | Daily-fresh content |
| 1 | Use cache if less than 1 hour old, otherwise livecrawl | Near real-time data |
| 0 | Always livecrawl (ignore cache entirely) | Real-time data where cached content is unusable |
| -1 | Never livecrawl (cache only) | Maximum speed, historical/static content |
| (omit) | Default behavior (livecrawl as fallback if no cache exists) | Recommended — balanced speed and freshness |
When LiveCrawl Isn’t Necessary:
Cached data is sufficient for many queries, especially for historical topics or educational content. These subjects rarely change, so reliable cached results can provide accurate information quickly.
See maxAgeHours docs for more details.
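Tying the table together, a near-real-time news request might look like the dict below. One caveat: the snippets in this guide show maxAgeHours as a request parameter but not its exact position in the body, so confirm the placement against the maxAgeHours docs before relying on it:

```python
# Illustrative request body; confirm field placement against the Exa docs.
payload = {
    "query": "OpenAI announcements",
    "category": "news",
    "type": "auto",
    "num_results": 10,
    "maxAgeHours": 1,  # accept cache only if < 1 hour old, otherwise livecrawl
    "contents": {"text": {"max_characters": 20000}},
}
```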
Other Endpoints
Beyond /search, Exa offers these endpoints:
| Endpoint | Description | Docs |
|---|---|---|
| /contents | Get contents for known URLs | Docs |
| /answer | Q&A with citations from web search | Docs |
Example - Get contents for URLs:
```
POST /contents
{"urls": ["https://example.com/article"], "text": {"max_characters": 20000}}
```

Troubleshooting
Results not relevant?
- Try `type: "auto"` - most balanced option
- Try `type: "deep"` - runs multiple query variations and ranks the combined results
- Refine the query - use singular form, be specific
- Check that the category matches your use case
Need structured data from search?
- Use `type: "deep"` or `type: "deep-reasoning"` with `outputSchema`
- Define the fields you need in the schema; Exa returns grounded JSON with citations
Results too slow?
- Use `type: "fast"`
- Reduce `num_results`
- Skip contents if you only need URLs
No results?
- Remove filters (date, domain restrictions)
- Simplify query
- Try `type: "auto"` - it has fallback mechanisms
Resources
- Docs: https://exa.ai/docs
- Dashboard: https://dashboard.exa.ai
- API Status: https://status.exa.ai