Paper Reading: "Vision-Language-Action (VLA) Models: Concepts, Progress, Applications and Challenges"


Abstract

Vision-Language-Action (VLA) models mark a transformative advancement in artificial intelligence, aiming to unify perception, natural language understanding, and embodied action within a single computational framework. This foundational review presents a comprehensive synthesis of recent advancements in Vision-Language-Action models, systematically organized across five thematic pillars that structure the landscape of this rapidly evolving field. We begin by establishing the conceptual foundations of VLA systems, tracing their evolution from cross-modal learning architectures to generalist agents that tightly integrate vision-language models (VLMs), action planners, and hierarchical controllers.
Our methodology adopts a rigorous literature review framework, covering over 80 VLA models published in the past three years. Key progress areas include architectural innovations, efficient training strategies, and real-time inference acceleration. We explore diverse application domains such as autonomous vehicles, medical and industrial robotics, precision agriculture, humanoid robotics, and augmented reality.
The review further addresses major challenges across real-time control, multimodal action representation, system scalability, generalization to unseen tasks, and ethical deployment risks. Drawing from the state-of-the-art, we propose targeted solutions including agentic AI adaptation, cross-embodiment generalization, and unified neuro-symbolic planning. We outline a forward-looking roadmap where VLA models, VLMs, and agentic AI converge to strengthen socially aligned, adaptive, and general-purpose embodied agents. This work, therefore, is expected to serve as a foundational reference for advancing intelligent, real-world robotics and artificial general intelligence. The project repository is available on GitHub (Source Link).

Conclusion

In this comprehensive review, we systematically evaluated the recent developments, methodologies, and applications of Vision-Language-Action (VLA) models published over the last three years. Our analysis began with the foundational concepts of VLAs, defining their role as multi-modal systems that unify visual perception, natural language understanding, and action generation in physical or simulated environments. We traced their evolution and timeline, detailing key milestones that marked the transition from isolated perception-action modules to fully unified, instruction-following robotic agents. We highlighted how multi-modal integration has matured from loosely coupled pipelines to transformer-based architectures that enable seamless coordination between modalities.
Next, we examined tokenization and representation techniques, focusing on how VLAs encode visual and linguistic information, including action primitives and spatial semantics. We explored learning paradigms, detailing the datasets and training strategies—from supervised learning and imitation learning to reinforcement learning and multi-modal pretraining—that have shaped VLA performance. In the “adaptive control and real-time execution” section, we discussed how modern VLAs are optimized for dynamic environments, analyzing policies that support latency-sensitive tasks. We then categorized major architectural innovations, surveying over 50 recent VLA models. This discussion included advancements in model design, memory systems, and interaction fidelity.
We further studied strategies for training efficiency improvement, including parameter-efficient methods such as LoRA, quantization, and model pruning, alongside acceleration techniques such as parallel decoding and hardware-aware inference. Our analysis of real-world applications highlighted both the promise and current limitations of VLA models across six domains: humanoid robotics, autonomous vehicles, industrial automation, healthcare, agriculture, and augmented reality (AR) navigation. Across these settings, VLAs demonstrated strong capabilities in high-level semantic reasoning, instruction-following, and task generalization, particularly in structured or partially controlled environments. However, their effectiveness was often constrained by real-time inference latency, limited robustness under environmental variability, and reduced precision in long-horizon or safety-critical control when compared to conventional analytical planning and control pipelines. Moreover, application-specific adaptations and extensive data curation were frequently required to achieve reliable performance, underscoring challenges in scalability and deployment. These findings suggest that while VLAs are well-suited for semantic decision making and flexible task specification, hybrid architectures that integrate VLA reasoning with classical or learned low-level controllers remain essential for practical, real-world operation.
In addressing challenges and limitations, we focused on five core areas: real-time inference; multi-modal action representation and safety; bias and generalization; system integration and compute constraints; and ethical deployment. We proposed potential solutions drawn from current literature, including model compression, cross-modal grounding, domain adaptation, and agentic learning frameworks. Finally, our discussion and future roadmap articulated how the convergence of VLMs, VLA architectures, and agentic AI systems is steering robotics toward artificial general intelligence (AGI). This review provides a unified understanding of VLA advancements, identifies unresolved challenges, and outlines a structured path forward for developing intelligent, embodied, and human-aligned agents in the future.

The paper "Vision-Language-Action (VLA) Models: Concepts, Progress, Applications and Challenges" is a systematic survey that comprehensively organizes and summarizes the research progress, core techniques, application scenarios, open challenges, and future directions of the recently emerging Vision-Language-Action (VLA) models. A detailed analysis of the paper follows.


1. Research Background and Motivation

1.1 Background

  • Traditional AI systems treated vision, language, and action as separate modules, each developing its own line of models (CNNs, LLMs, RL).
  • Although Vision-Language Models (VLMs) have made progress in image-text understanding, they lack the ability to generate actions in the physical world.
  • As a result, robotic systems struggle to execute tasks in real environments in a flexible, generalizable, end-to-end manner.

1.2 Motivation

  • Propose the VLA model as a unified framework that integrates visual perception, language understanding, and action execution.
  • Aim to advance Embodied AI toward truly general-purpose robots.

2. Core Concepts of VLA Models

2.1 Definition

A VLA model is a multimodal intelligent system that can:

  • Perceive: understand images or video through a vision encoder (e.g., ViT, CNN);
  • Understand: parse instructions through a language model (e.g., BERT, LLaMA);
  • Act: generate executable robot action sequences through a policy module (a minimal pipeline sketch follows this list).
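To make the perceive-understand-act loop concrete, here is a minimal, hypothetical sketch of a VLA forward pass in PyTorch. The class and module choices (`TinyVLA`, a conv patch-embed standing in for a ViT, an embedding table standing in for an LLM) and all dimensions are illustrative assumptions, not the design of any model in the survey; real systems use large pretrained backbones.

```python
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Illustrative VLA skeleton: vision encoder + language encoder + policy head."""
    def __init__(self, d_model=256, n_actions=7):
        super().__init__()
        # "Perceive": patch-embed an image into visual tokens (stand-in for a ViT)
        self.vision = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        # "Understand": embed instruction token ids (stand-in for an LLM encoder)
        self.language = nn.Embedding(30522, d_model)
        # Fuse both modalities with a small transformer over the joint sequence
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # "Act": regress a continuous action (e.g., a 7-DoF end-effector command)
        self.policy = nn.Linear(d_model, n_actions)

    def forward(self, image, instruction_ids):
        vis = self.vision(image).flatten(2).transpose(1, 2)  # (B, N_patches, D)
        txt = self.language(instruction_ids)                 # (B, N_tokens, D)
        fused = self.fusion(torch.cat([txt, vis], dim=1))    # joint sequence
        return self.policy(fused.mean(dim=1))                # (B, n_actions)

model = TinyVLA()
action = model(torch.randn(1, 3, 224, 224), torch.randint(0, 30522, (1, 12)))
print(action.shape)  # torch.Size([1, 7])
```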

2.2 Three Development Phases

  1. 2022–2023 (foundational fusion): e.g., CLIPort, RT-1, Gato; first steps toward fusing vision, language, and action.
  2. 2024 (specialized reasoning): e.g., VoxPoser, RT-2, Octo; introduced visual reasoning and diffusion policies.
  3. 2025 (safety and generalization): e.g., SafeVLA, Humanoid-VLA; emphasized robustness, safety, and cross-platform generalization.

3. Core Techniques

3.1 Multimodal Fusion

  • Joint modeling of visual, linguistic, and state information via Transformer architectures.
  • Semantic alignment via cross-attention, joint embeddings, and prefix tokens (a minimal cross-attention sketch follows this list).
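The sketch below shows the cross-attention pattern in isolation: language tokens act as queries and attend over visual patch tokens, so each instruction word is grounded in image regions. It uses PyTorch's standard `nn.MultiheadAttention`; the dimensions and token counts are assumptions for illustration only.

```python
import torch
import torch.nn as nn

d_model = 256
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)

lang_tokens = torch.randn(1, 12, d_model)   # instruction embeddings (queries)
vis_tokens = torch.randn(1, 196, d_model)   # ViT patch embeddings (keys/values)

# Each language token attends over all visual patches, producing
# visually grounded language features.
grounded, attn_weights = cross_attn(query=lang_tokens, key=vis_tokens, value=vis_tokens)
print(grounded.shape)      # torch.Size([1, 12, 256])
print(attn_weights.shape)  # torch.Size([1, 12, 196]): per-word attention over patches
```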

3.2 Unified Tokenization

  • Prefix tokens: encode the visual scene and the language instruction;
  • State tokens: encode the robot's current state (e.g., joint angles, force feedback);
  • Action tokens: an autoregressive generator emits the action sequence token by token, analogous to language generation (a discretization sketch follows this list).
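To show how continuous actions become tokens that an autoregressive decoder can emit, here is a sketch of per-dimension binning, loosely in the spirit of RT-1/RT-2-style discretization. The bin count (256) and the action range are assumptions made for this example.

```python
import torch

# Hypothetical unified layout: [prefix tokens | state tokens | action tokens].
# Each continuous action dimension is mapped to one of N_BINS integer ids.
N_BINS = 256
ACTION_LOW, ACTION_HIGH = -1.0, 1.0  # assumed normalized action range

def action_to_tokens(action: torch.Tensor) -> torch.Tensor:
    """Map continuous actions in [low, high] to integer bin ids."""
    norm = (action - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
    return (norm.clamp(0, 1) * (N_BINS - 1)).round().long()

def tokens_to_action(tokens: torch.Tensor) -> torch.Tensor:
    """Invert the binning (exact up to quantization error of 1/(N_BINS-1))."""
    return tokens.float() / (N_BINS - 1) * (ACTION_HIGH - ACTION_LOW) + ACTION_LOW

a = torch.tensor([0.30, -0.75, 0.0])  # e.g., a 3-DoF action chunk
ids = action_to_tokens(a)             # tensor([166,  32, 128])
print(ids, tokens_to_action(ids))     # round-trips within one bin width
```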

3.3 Learning Strategies

  • Internet-scale pretraining: e.g., LAION-5B, HowTo100M;
  • Robot trajectory data: e.g., RT-X, BridgeData;
  • Multi-stage training: align semantics first, then learn actions, and finally fine-tune on the target task (sketched below).
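The following sketch expresses that three-stage recipe as a training schedule. The stage names, dataset labels, learning rates, and which modules are unfrozen at each stage are all illustrative assumptions, not a recipe from any specific paper.

```python
# Hedged sketch of a three-stage VLA training schedule.
stages = [
    # 1) Semantic alignment: web-scale image-text pairs; train only the fusion
    #    layers, keeping pretrained vision/language backbones frozen.
    {"name": "align", "data": "web_image_text", "train": ["fusion"], "lr": 1e-4},
    # 2) Action learning: imitation on robot trajectories (RT-X/BridgeData-style);
    #    unfreeze the policy head as well.
    {"name": "action", "data": "robot_trajectories", "train": ["fusion", "policy"], "lr": 5e-5},
    # 3) Task fine-tuning: a small in-domain demo set, low LR, everything trainable.
    {"name": "finetune", "data": "target_task_demos",
     "train": ["vision", "language", "fusion", "policy"], "lr": 1e-5},
]

for stage in stages:
    print(f"stage={stage['name']}: data={stage['data']}, "
          f"trainable={stage['train']}, lr={stage['lr']}")
    # ... set requires_grad per module, build the optimizer, run epochs ...
```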

4. Representative Models

The paper lists more than 45 VLA models, grouped along the timeline into three categories (a control-loop sketch for the dual-system design follows the table):

| Model Category | Examples | Characteristics |
| --- | --- | --- |
| Early fusion models | CLIPort, RT-1, Gato | Basic fusion, end-to-end control |
| Diffusion policy models | Diffusion Policy, Pi-0 | Multimodal action generation, strong adaptability |
| Dual-system architectures | GR00T N1, HybridVLA | High-level planning separated from low-level control, improving efficiency and safety |
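To illustrate the dual-system idea, here is a toy control loop in which a slow VLM planner refreshes a subgoal a few times per second while a fast low-level controller tracks it at a much higher rate. The rates, function names, and the subgoal format are assumptions in the spirit of GR00T N1-style designs, not their actual interfaces.

```python
# Illustrative dual-system loop: slow planner + fast controller.
PLANNER_HZ, CONTROLLER_HZ = 2, 50  # assumed rates

def slow_planner(observation, instruction):
    """Stand-in for a large VLM: returns a high-level subgoal / action chunk."""
    return {"subgoal": "reach_handle"}

def fast_controller(observation, subgoal):
    """Stand-in for a learned or classical low-level policy."""
    return [0.0] * 7  # e.g., a joint-velocity command

subgoal, t_last_plan = None, 0.0
for step in range(200):                      # one simulated control episode
    now = step / CONTROLLER_HZ
    obs = {}                                 # camera + proprioception (omitted)
    # Re-plan only when the planner's budget allows; the controller never waits.
    if subgoal is None or now - t_last_plan >= 1.0 / PLANNER_HZ:
        subgoal = slow_planner(obs, "open the door")
        t_last_plan = now
    command = fast_controller(obs, subgoal)  # runs every tick at 50 Hz
```

The point of the split is that the expensive semantic reasoning runs off the critical path, so control-rate latency is bounded by the small policy rather than the VLM.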

5. Application Scenarios

5.1 Humanoid Robots

  • Helix and RoboNurse-VLA can carry out complex tasks such as opening doors, fetching objects, and surgical assistance;
  • Emphasis on language instruction understanding, dynamic environment adaptation, and safe control.

5.2 Autonomous Driving

  • OpenDriveVLA and ORION fuse vision with language instructions to generate driving behavior;
  • Emphasis on interpretability and closed-loop control.

5.3 Industrial Manufacturing

  • CogACT supports multi-step assembly and tool switching;
  • Emphasis on generalization and task compositionality.

5.4 Healthcare and Agriculture

  • RoboNurse-VLA and UAV-VLA support fine-grained manipulation and remote instruction execution;
  • Emphasis on high precision and human-robot collaboration.

5.5 Augmented Reality Navigation

  • AR interaction systems generate real-time navigation cues from vision and language;
  • Emphasis on real-time performance and personalized adaptation.

6. Challenges and Limitations

| Challenge Category | Specific Issues |
| --- | --- |
| Real-time inference | Autoregressive generation is slow and struggles to meet high-frequency control requirements |
| Action representation | Discretized actions lack precision; diffusion models incur high computational cost |
| Safety | Models lack robustness in unseen environments, making physical safety hard to guarantee |
| Dataset bias | Web data carries biases that hurt generalization |
| System integration | High-dimensional vision is hard to align with low-dimensional control |
| Ethics and privacy | Models may leak private information and exacerbate social inequality |

7. Future Directions

7.1 Unified Foundation Models

  • Build a "brain-level" multimodal foundation model that unifies perception, reasoning, and action.

7.2 Continual Learning and Adaptability

  • Introduce agentic AI so that deployed models can keep learning and self-optimizing.

7.3 Neuro-Symbolic Planning

  • Combine symbolic reasoning with neural networks to improve task decomposition and interpretability.

7.4 World Models and Causal Reasoning

  • Predict future states to deepen the model's understanding and control of the physical world.

7.5 Efficient Deployment

  • Use model compression, quantization, and parallel decoding to enable edge deployment (a minimal quantization sketch follows).
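As one concrete example of the compression techniques named above, the sketch below applies PyTorch's post-training dynamic quantization to a toy policy head. The network itself is a placeholder standing in for a real VLA head; actual deployments would also weigh LoRA adapters, pruning, and hardware-specific runtimes.

```python
import torch
import torch.nn as nn

# Toy stand-in for a VLA policy head (feature vector -> 7-DoF action).
policy_head = nn.Sequential(
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 7),
)

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly; no retraining required.
quantized = torch.quantization.quantize_dynamic(
    policy_head, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(policy_head(x).shape, quantized(x).shape)  # same interface, smaller weights
```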

7.6 Safety and Ethical Alignment

  • Build auditable, interpretable VLA systems aligned with human values.

8. Summary and Contributions

  • This is the first systematic survey of VLA models, covering concepts, models, methods, applications, challenges, and future directions.
  • It proposes a five-pillar analysis framework: conceptual foundations, technical progress, application scenarios, challenges and solutions, and a future roadmap.
  • It argues that VLA models are a key path to embodied intelligence and points toward potential routes to AGI.
