Hire Yourself an AI Employee: A Detailed Tutorial on Installing Openclaw on Ubuntu 22.04 (Beginner-Friendly, Continuously Updated)


Installing Openclaw on Ubuntu 22.04

Although I'm done with GNU Radio and the project is finished, I still need to do a complete port, and I also have to rework the RF front end to increase the bandwidth and speed up frequency switching. The real reason for installing Openclaw is to have AI help me with the GNU Radio port, FPGA development, the RF work, and Linux programming on the ARM side.
In one sentence: verify whether AI can actually make me more productive.

Preparation

As San Ge always says, whatever you're doing, start from the official website. The Chinese Openclaw site is:
https://clawd.org.cn/
We'll just use the Chinese site: no VPN, no fancy tricks.


One-click install

curl -fsSL https://clawd.org.cn/install.sh | sudo bash

Type Y when prompted.


A DeepSeek API key is needed here
Go to https://platform.deepseek.com/sign_in,
register, and obtain an API key.

Back in the install terminal, enter the key.


Set up a channel: configure Feishu

You need to create a Feishu bot.
Go to https://open.feishu.cn and open the developer console; you will need to register an account there and create an AI bot.



Once the bot is created, you can see its App ID and App Secret.



Then enter the App ID and App Secret in the install terminal.
You can also do this later with a separate command:
openclaw-cn configure --section channels



If something goes wrong, run openclaw-cn onboard --install-daemon and configure again.
In the steps above the gateway had not been installed, so the web UI would not open.
Run sudo apt install net-tools to install the required tools; after that, the page opens.
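If you want to confirm that the gateway is actually listening before opening the browser, `netstat -tlnp` works once net-tools is installed; the same check in a few lines of Python looks like this (port 8080 is an assumption, use whichever port the installer printed):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 8080 is an assumption; substitute the port from your install log.
print("gateway listening:", port_open("127.0.0.1", 8080))
```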



All done, enjoy!

Making the AI employee more useful

Adding a free model

Configure the official free GLM-4.7-Flash API.
Register as a developer at https://bigmodel.cn/ and get an API key.



Go to the console, create a new API key, and copy it.
Then add the following to the openclaw.json file (watch out for straight vs. curly quotes when copying):
"models": {
  "providers": {
    "glm": {
      "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
      "apiKey": "YOUR_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "glm-4.7-flash",
          "name": "GLM-4.7 Flash",
          "contextWindow": 128000,
          "maxTokens": 4096,
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          }
        }
      ]
    }
  }
},
"agents": {
  "defaults": {
    "model": {
      "primary": "glm/glm-4.7-flash"
    },
    "maxConcurrent": 4,
    "subagents": {
      "maxConcurrent": 8
    }
  }
},

You can then restart Openclaw by running openclaw-cn gateway restart.
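Hand-edited JSON breaks easily (curly quotes pasted from blogs are a classic cause), so it can be worth validating openclaw.json before restarting. A minimal sketch; the config path below is an assumption, point it at wherever your openclaw.json actually lives:

```python
import json
from pathlib import Path
from typing import Optional

def primary_model(cfg_text: str) -> Optional[str]:
    """Parse config text and return agents.defaults.model.primary, if set."""
    cfg = json.loads(cfg_text)  # raises json.JSONDecodeError on bad syntax
    return cfg.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")

# Assumed location; adjust to your install.
cfg_path = Path.home() / ".openclaw" / "openclaw.json"
if cfg_path.exists():
    try:
        print("valid JSON, primary model:", primary_model(cfg_path.read_text(encoding="utf-8")))
    except json.JSONDecodeError as e:
        print(f"syntax error at line {e.lineno}: {e.msg}")
```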


Configuring DingTalk

Copy the API key and secret from the DingTalk developer platform at https://open-dev.dingtalk.com:
Client ID (i.e. the AppKey)
Client Secret (i.e. the AppSecret)
Under "Permission Management", search for and add the following permissions:
Card.Instance.Write (card instance write permission)
Card.Streaming.Write (card streaming write permission)
im:message (message-related permissions, as needed)
Then install the DingTalk plugin.
OpenClaw has no built-in DingTalk plugin, so the community version must be installed manually:
openclaw-cn plugins install https://github.com/soimy/clawdbot-channel-dingtalk.git


Adding DeepSeek on top of GLM-4.7-Flash

Please also add DeepSeek for me. I already have a DeepSeek API key; when the free GLM-4.7 Flash tokens run out, switch to DeepSeek automatically.



But then the AI employee got stuck.
It turned out the openclaw.json file was misconfigured; after editing it again and adding the API key, it worked.
Restart with openclaw-cn gateway restart and everything is fine.
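The behavior being asked for here is a simple fallback chain. Stripped of the Openclaw specifics (which the agent configures for you), the pattern looks like this hypothetical sketch; the callables and the RuntimeError are illustrative stand-ins, not real Openclaw or provider APIs:

```python
def chat_with_fallback(prompt: str, primary, fallback) -> str:
    """Try the free primary model first; fall back when its quota is exhausted.

    `primary` and `fallback` are illustrative callables, and RuntimeError
    stands in for whatever quota/rate-limit error the provider raises.
    """
    try:
        return primary(prompt)
    except RuntimeError:
        return fallback(prompt)
```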


Adding the MiniMax and Doubao models

  1. MiniMax M2.5 API key: visit the MiniMax platform at https://api.minimax.chat/;
    register/log in and get an API key directly; copy and save it for later.
  2. Seedance 2.0 API key: visit the Volcano Ark platform at https://console.volcengine.com/ark;
    register/log in to a Volcengine account and enable the "Volcano Ark" service;
    create a key on the "API Key Management" page, then copy and save it for later.
    After that, configure openclaw.json
    and you can connect several large models at once.
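The new entries follow the same providers schema as the GLM example earlier. As a hedged sketch, a provider entry can be merged into the config programmatically; the baseUrl and model id below are assumptions, so check the MiniMax documentation for the real values:

```python
import json

# Provider entry mirroring the GLM example's schema. baseUrl and the model
# id are assumptions for illustration, not verified MiniMax values.
minimax_provider = {
    "baseUrl": "https://api.minimax.chat/v1",
    "apiKey": "YOUR_MINIMAX_API_KEY",
    "api": "openai-completions",
    "models": [{"id": "minimax-m2.5", "name": "MiniMax M2.5",
                "contextWindow": 128000, "maxTokens": 4096,
                "reasoning": False, "input": ["text"],
                "cost": {"input": 0, "output": 0,
                         "cacheRead": 0, "cacheWrite": 0}}],
}

def add_provider(cfg: dict, name: str, provider: dict) -> dict:
    """Insert a provider under models.providers, creating the path if missing."""
    cfg.setdefault("models", {}).setdefault("providers", {})[name] = provider
    return cfg

cfg = add_provider({}, "minimax", minimax_provider)
print(json.dumps(cfg, indent=2))
```

The same helper works for a Doubao/Seedance entry; only the provider name and fields change.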

Configuring web search

https://exa.ai/
Registration is free.



Sign In,
then fill in your information:



Then copy the setup prompt so we can use it later.
Also copy the API key and hand it to Openclaw; let it do the configuration for you.
Enter:
I registered with Exa.ai and got the API key b9fxxxxx. Please configure exa.ai for me to replace Brave Search for web searches.
Then restart with openclaw-cn gateway restart and it's ready.

Exa API Setup Guide

Your Configuration

| Setting | Value |
|---------|-------|
| Coding Tool | Codex |
| Framework | Other |
| Use Case | Web search tool |
| Search Type | Auto - Balanced relevance and speed (~1 second) |
| Content | Full text |

Project Description: (Not provided)

API Key Setup

Environment Variable

export EXA_API_KEY="YOUR_API_KEY"

.env File

EXA_API_KEY=YOUR_API_KEY 

🔌 Exa MCP Server for OpenAI Codex

Give OpenAI Codex real-time web search, code context, and company research with Exa MCP.

Run in terminal:

codex mcp add exa --url https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a 

Tool enablement (optional):
Add a tools= query param to the MCP URL.

Enable specific tools:

https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a&tools=web_search_exa,get_code_context_exa,people_search_exa 

Enable all tools:

https://mcp.exa.ai/mcp?exaApiKey=f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a&tools=web_search_exa,web_search_advanced_exa,get_code_context_exa,crawling_exa,company_research_exa,people_search_exa,deep_researcher_start,deep_researcher_check 

Your API key:f2492f04-b9f0-4c09-86a5-1ce6e1b9a24a
Manage keys at dashboard.exa.ai/api-keys.

Troubleshooting: if tools don’t appear, restart your MCP client after updating the config.

📖 Full docs: docs.exa.ai/reference/exa-mcp


Quick Start

cURL

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "latest developments in AI safety research",
    "type": "auto",
    "num_results": 10,
    "contents": { "text": { "max_characters": 20000 } }
  }'

Function Calling / Tool Use

Function calling (also known as tool use) allows your AI agent to dynamically decide when to search the web based on the conversation context. Instead of searching on every request, the LLM intelligently determines when real-time information would improve its response—making your agent more efficient and accurate.

Why use function calling with Exa?

  • Your agent can ground responses in current, factual information
  • Reduces hallucinations by fetching real sources when needed
  • Enables multi-step reasoning where the agent searches, analyzes, and responds

📚 Full documentation: https://docs.exa.ai/reference/openai-tool-calling

OpenAI Function Calling

import json
from openai import OpenAI
from exa_py import Exa

openai = OpenAI()
exa = Exa()

tools = [{"type": "function", "function": {
    "name": "exa_search",
    "description": "Search the web for current information.",
    "parameters": {"type": "object",
                   "properties": {"query": {"type": "string", "description": "Search query"}},
                   "required": ["query"]}}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10,
                                      text={"max_characters": 20000})
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "What's the latest in AI safety?"}]
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    search_results = exa_search(json.loads(tool_call.function.arguments)["query"])
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": search_results})
    final = openai.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)

Anthropic Tool Use

import anthropic
from exa_py import Exa

client = anthropic.Anthropic()
exa = Exa()

tools = [{"name": "exa_search",
          "description": "Search the web for current information.",
          "input_schema": {"type": "object",
                           "properties": {"query": {"type": "string", "description": "Search query"}},
                           "required": ["query"]}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10,
                                      text={"max_characters": 20000})
    return "\n".join([f"{r.title}: {r.url}" for r in results.results])

messages = [{"role": "user", "content": "Latest quantum computing developments?"}]
response = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096,
                                  tools=tools, messages=messages)
if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    tool_result = exa_search(tool_use.input["query"])
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [{"type": "tool_result",
                                                  "tool_use_id": tool_use.id,
                                                  "content": tool_result}]})
    final = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096,
                                   tools=tools, messages=messages)
    print(final.content[0].text)

Search Type Reference

| Type | Best For | Speed | Depth |
|------|----------|-------|-------|
| fast | Real-time apps, autocomplete, quick lookups | Fastest | Basic |
| auto | Most queries - balanced relevance & speed | Medium | Smart |
| deep | Research, enrichment, thorough results | Slow | Deep |
| deep-reasoning | Complex research, multi-step reasoning | Slowest | Deepest |

Tip: type="auto" works well for most queries. Use type="deep" when you need thorough research results or structured outputs with field-level grounding.


Content Configuration

Choose ONE content type per request (not both):

| Type | Config | Best For |
|------|--------|----------|
| Text | "text": {"max_characters": 20000} | Full content extraction, RAG |
| Highlights | "highlights": {"max_characters": 4000} | Snippets, summaries, lower cost |

⚠️ Token usage warning: Using text: true (full page text) can significantly increase token count, leading to slower and more expensive LLM calls. To mitigate:

  • Add max_characters limit: "text": {"max_characters": 10000}
  • Use highlights instead if you don’t need contiguous text

When to use text vs highlights:

  • Text - When you need untruncated, contiguous content (e.g., code snippets, full articles, documentation)
  • Highlights - When you need key excerpts and don’t need the full context (e.g., summaries, Q&A, general research)
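To make the one-content-type-per-request rule concrete, a small helper (a sketch, not part of the Exa SDK) can build the contents block and reject mixed configurations:

```python
def contents_config(kind: str, max_characters: int) -> dict:
    """Build the `contents` block for an Exa search request.

    Per the guide, choose exactly one of 'text' or 'highlights' per request.
    """
    if kind not in ("text", "highlights"):
        raise ValueError("kind must be 'text' or 'highlights'")
    return {kind: {"max_characters": max_characters}}

print(contents_config("highlights", 4000))
```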

Domain Filtering (Optional)

Usually not needed - Exa’s neural search finds relevant results without domain restrictions.

When to use:

  • Targeting specific authoritative sources
  • Excluding low-quality domains from results

Example:

{"includeDomains":["arxiv.org","github.com"],"excludeDomains":["pinterest.com"]}

Note: includeDomains and excludeDomains can be used together to include a broad domain while excluding specific subdomains (e.g., "includeDomains": ["vercel.com"], "excludeDomains": ["community.vercel.com"]).


Web Search Tool

{"query":"latest developments in AI safety research","num_results":10,"contents":{"text":{"max_characters":20000}}}

Tips:

  • Use type: "auto" for most queries
  • Great for building search-powered chatbots or agents
  • Combine with contents for RAG workflows

Category Examples

Use category filters to search dedicated indexes. Each category returns only that content type.

Note: Categories can be restrictive. If you’re not getting enough results, try searching without a category first, then add one if needed.

People Search (category: "people")

Find people by role, expertise, or what they work on

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "software engineer distributed systems",
    "category": "people",
    "type": "auto",
    "num_results": 10
  }'

Tips:

  • Use SINGULAR form
  • Describe what they work on
  • No date/text filters supported

Company Search (category: "company")

Find companies by industry, criteria, or attributes

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "AI startup healthcare",
    "category": "company",
    "type": "auto",
    "num_results": 10
  }'

Tips:

  • Use SINGULAR form
  • Simple entity queries
  • Returns company objects, not articles

News Search (category: "news")

News articles

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "OpenAI announcements",
    "category": "news",
    "type": "auto",
    "num_results": 10,
    "contents": { "text": { "max_characters": 20000 } }
  }'

Tips:

  • Use livecrawl: "preferred" for breaking news
  • Avoid date filters unless required

Research Papers (category: "research paper")

Academic papers

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "transformer architecture improvements",
    "category": "research paper",
    "type": "auto",
    "num_results": 10,
    "contents": { "text": { "max_characters": 20000 } }
  }'

Tips:

  • Use type: "auto" for most queries
  • Includes arxiv.org, paperswithcode.com, and other academic sources

Tweet Search (category: "tweet")

Twitter/X posts

curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "AI safety discussion",
    "category": "tweet",
    "type": "auto",
    "num_results": 10,
    "contents": { "text": { "max_characters": 20000 } }
  }'

Tips:

  • Good for real-time discussions
  • Captures public sentiment

Content Freshness (maxAgeHours)

maxAgeHours sets the maximum acceptable age (in hours) for cached content. If the cached version is older than this threshold, Exa will livecrawl the page to get fresh content.

| Value | Behavior | Best For |
|-------|----------|----------|
| 24 | Use cache if less than 24 hours old, otherwise livecrawl | Daily-fresh content |
| 1 | Use cache if less than 1 hour old, otherwise livecrawl | Near real-time data |
| 0 | Always livecrawl (ignore cache entirely) | Real-time data where cached content is unusable |
| -1 | Never livecrawl (cache only) | Maximum speed, historical/static content |
| (omit) | Default behavior (livecrawl as fallback if no cache exists) | Recommended: balanced speed and freshness |

When LiveCrawl Isn’t Necessary:
Cached data is sufficient for many queries, especially for historical topics or educational content. These subjects rarely change, so reliable cached results can provide accurate information quickly.

See maxAgeHours docs for more details.
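The cache semantics above map directly onto one optional request field. A small sketch of building the payload (the helper is illustrative; only the maxAgeHours field itself comes from the Exa guide):

```python
import json

def build_search_payload(query, max_age_hours=None):
    """Build an Exa /search payload; omit maxAgeHours for the default behavior."""
    payload = {"query": query, "type": "auto", "num_results": 10}
    if max_age_hours is not None:
        # 24 = daily-fresh, 0 = always livecrawl, -1 = cache only (per the table)
        payload["maxAgeHours"] = max_age_hours
    return payload

print(json.dumps(build_search_payload("OpenAI announcements", max_age_hours=24)))
```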


Other Endpoints

Beyond /search, Exa offers these endpoints:

| Endpoint | Description | Docs |
|----------|-------------|------|
| /contents | Get contents for known URLs | Docs |
| /answer | Q&A with citations from web search | Docs |

Example - Get contents for URLs:

POST /contents
{"urls": ["https://example.com/article"], "text": {"max_characters": 20000}}

Troubleshooting

Results not relevant?

  1. Try type: "auto" - most balanced option
  2. Try type: "deep" - runs multiple query variations and ranks the combined results
  3. Refine query - use singular form, be specific
  4. Check category matches your use case

Need structured data from search?

  1. Use type: "deep" or type: "deep-reasoning" with outputSchema
  2. Define the fields you need in the schema — Exa returns grounded JSON with citations

Results too slow?

  1. Use type: "fast"
  2. Reduce num_results
  3. Skip contents if you only need URLs

No results?

  1. Remove filters (date, domain restrictions)
  2. Simplify query
  3. Try type: "auto" - has fallback mechanisms

Resources

  • Docs: https://exa.ai/docs
  • Dashboard: https://dashboard.exa.ai
  • API Status: https://status.exa.ai
