Paper Reading: “Vision-Language-Action (VLA) Models: Concepts, Progress, Applications and Challenges”

Abstract

Vision-Language-Action (VLA) models mark a transformative advancement in artificial intelligence, aiming to unify perception, natural language understanding, and embodied action within a single computational framework. This foundational review presents a comprehensive synthesis of recent advancements in Vision-Language-Action models, systematically organized across five thematic pillars that structure the landscape of this rapidly evolving field. We begin by establishing the conceptual foundations of VLA systems, tracing their evolution from cross-modal learning architectures to generalist agents that tightly integrate vision-language models (VLMs), action planners, and hierarchical controllers.
Our methodology adopts a rigorous literature review framework, covering over 80 VLA models published in the past three years. Key progress areas include architectural innovations, efficient training strategies, and real-time inference accelerations. We explore diverse application domains such as autonomous vehicles, medical and industrial robotics, precision agriculture, humanoid robotics, and augmented reality.
The review further addresses major challenges across real-time control, multimodal action representation, system scalability, generalization to unseen tasks, and ethical deployment risks. Drawing from the state-of-the-art, we propose targeted solutions including agentic AI adaptation, cross-embodiment generalization, and unified neuro-symbolic planning. We outline a forward-looking roadmap where VLA models, VLMs, and agentic AI converge to strengthen socially aligned, adaptive, and general-purpose embodied agents. This work, therefore, is expected to serve as a foundational reference for advancing intelligent, real-world robotics and artificial general intelligence. The project repository is available on GitHub (Source Link).

Conclusion

In this comprehensive review, we systematically evaluated the recent developments, methodologies, and applications of Vision-Language-Action (VLA) models published over the last three years. Our analysis began with the foundational concepts of VLAs, defining their role as multi-modal systems that unify visual perception, natural language understanding, and action generation in physical or simulated environments. We traced their evolution and timeline, detailing key milestones that marked the transition from isolated perception-action modules to fully unified, instruction-following robotic agents. We highlighted how multi-modal integration has matured from loosely coupled pipelines to transformer-based architectures that enable seamless coordination between modalities.
Next, we examined tokenization and representation techniques, focusing on how VLAs encode visual and linguistic information, including action primitives and spatial semantics. We explored learning paradigms, detailing the datasets and training strategies—from supervised learning and imitation learning to reinforcement learning and multi-modal pretraining—that have shaped VLA performance. In the “adaptive control and real-time execution” section, we discussed how modern VLAs are optimized for dynamic environments, analyzing policies that support latency-sensitive tasks. We then categorized major architectural innovations, surveying over 50 recent VLA models. This discussion included advancements in model design, memory systems, and interaction fidelity.
We further studied strategies for training efficiency improvement, including parameter-efficient methods such as LoRA, quantization, and model pruning, alongside acceleration techniques such as parallel decoding and hardware-aware inference. Our analysis of real-world applications highlighted both the promise and current limitations of VLA models across six domains: humanoid robotics, autonomous vehicles, industrial automation, healthcare, agriculture, and augmented reality (AR) navigation. Across these settings, VLAs demonstrated strong capabilities in high-level semantic reasoning, instruction-following, and task generalization, particularly in structured or partially controlled environments. However, their effectiveness was often constrained by real-time inference latency, limited robustness under environmental variability, and reduced precision in long-horizon or safety-critical control when compared to conventional analytical planning and control pipelines. Moreover, application-specific adaptations and extensive data curation were frequently required to achieve reliable performance, underscoring challenges in scalability and deployment. These findings suggest that while VLAs are well-suited for semantic decision making and flexible task specification, hybrid architectures that integrate VLA reasoning with classical or learned low-level controllers remain essential for practical, real-world operation.
In addressing challenges and limitations, we focused on five core areas: real-time inference; multi-modal action representation and safety; bias and generalization; system integration and compute constraints; and ethical deployment. We proposed potential solutions drawn from current literature, including model compression, cross-modal grounding, domain adaptation, and agentic learning frameworks. Finally, our discussion and future roadmap articulated how the convergence of VLMs, VLA architectures, and agentic AI systems is steering robotics toward artificial general intelligence (AGI). This review provides a unified understanding of VLA advancements, identifies unresolved challenges, and outlines a structured path forward for developing intelligent, embodied, and human-aligned agents in the future.

The paper “Vision-Language-Action (VLA) Models: Concepts, Progress, Applications and Challenges” is a systematic survey that aims to comprehensively organize the research progress, core techniques, application scenarios, open challenges, and future directions of the recently emerging Vision-Language-Action (VLA) models. A detailed analysis of the paper follows.


1. Research Background and Motivation

1.1 Background

  • Traditional AI systems treat vision, language, and action as independent modules, developing CNNs, LLMs, and RL models separately.
  • Although Vision-Language Models (VLMs) have made strong progress in image-text understanding, they lack the ability to generate actions in the physical world.
  • As a result, robotic systems struggle to achieve flexible, generalizable, end-to-end task execution in real environments.

1.2 Motivation

  • Propose the VLA model as a unified framework that integrates visual perception, language understanding, and action execution.
  • Advance Embodied AI toward truly general-purpose robots.

2. Core Concepts of VLA Models

2.1 Definition

A VLA model is a multi-modal intelligent system that can:

  • Perceive: understand images or videos through visual encoders (e.g., ViT, CNN);
  • Understand: parse instructions through language models (e.g., BERT, LLaMA);
  • Act: generate executable robot action sequences through a policy module (see the pipeline sketch below).
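A minimal sketch of this perceive-understand-act pipeline in PyTorch. All module choices (the tiny conv encoder, the two-layer fusion Transformer, the linear policy head) are illustrative assumptions, not the architecture of any specific model in the survey:

```python
# Minimal VLA forward-pass sketch (PyTorch). Every module here is a
# simplified stand-in: a real system would use a ViT/CNN backbone,
# a pretrained language model, and a far deeper fusion stack.
import torch
import torch.nn as nn

class MinimalVLA(nn.Module):
    def __init__(self, d_model=512, n_actions=7, vocab_size=32000):
        super().__init__()
        # Perceive: patchify the image into feature tokens
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=8, stride=8), nn.Flatten(2)
        )
        self.vision_proj = nn.Linear(64, d_model)
        # Understand: embed the tokenized instruction
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Act: map fused features to a continuous action vector
        self.policy_head = nn.Linear(d_model, n_actions)

    def forward(self, image, instruction_ids):
        vis = self.vision_proj(self.vision_encoder(image).transpose(1, 2))  # (B, P, D)
        txt = self.text_embed(instruction_ids)                              # (B, T, D)
        fused = self.fusion(torch.cat([vis, txt], dim=1))                   # (B, P+T, D)
        return self.policy_head(fused.mean(dim=1))  # e.g., 7-DoF joint targets

model = MinimalVLA()
action = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 12)))
print(action.shape)  # torch.Size([1, 7])
```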

2.2 Three Development Stages

  1. 2022–2023 (foundational fusion): e.g., CLIPort, RT-1, Gato; first attempts at fusing vision, language, and action.
  2. 2024 (specialized reasoning): e.g., VoxPoser, RT-2, Octo; introduce visual reasoning and diffusion policies.
  3. 2025 (safety and generalization): e.g., SafeVLA, Humanoid-VLA; emphasize robustness, safety, and cross-platform generalization.

3. Core Technical Analysis

3.1 Multi-Modal Fusion

  • Jointly model visual, linguistic, and robot-state information with Transformer architectures.
  • Achieve semantic alignment via cross-attention, joint embeddings, and prefix tokens (see the sketch below).
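A minimal sketch of cross-attention-based alignment, where language tokens query visual features so each word can ground itself in relevant image regions. The module layout is an illustrative assumption; real VLAs stack many such layers:

```python
# Cross-attention fusion sketch (PyTorch): language tokens attend to vision.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_tokens, visual_tokens):
        # Queries come from language; keys/values come from vision.
        attended, _ = self.cross_attn(
            query=text_tokens, key=visual_tokens, value=visual_tokens
        )
        return self.norm(text_tokens + attended)  # residual + norm, one fusion block

fusion = CrossModalFusion()
text = torch.randn(1, 12, 512)     # 12 instruction tokens
vision = torch.randn(1, 196, 512)  # 14x14 ViT patch features
print(fusion(text, vision).shape)  # torch.Size([1, 12, 512])
```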

3.2 Unified Tokenization

  • Prefix tokens: encode the visual scene and the language instruction;
  • State tokens: encode the robot's current state (e.g., joint angles, force feedback);
  • Action tokens: generated by an autoregressive decoder, analogous to language generation (see the sketch below).
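A sketch of the unified token stream and of greedy autoregressive action decoding. The stub decoder, vocabulary sizes, and bin counts are assumptions for illustration only:

```python
# Unified token layout: [prefix | state | actions-so-far], decoded one
# discretized action token at a time, like next-word prediction.
import torch

def build_sequence(prefix, state, actions_so_far):
    """Concatenate the token stream the decoder conditions on."""
    return torch.cat([prefix, state, actions_so_far], dim=1)

def decode_action(decoder, prefix, state, horizon=8):
    actions = torch.empty(1, 0, dtype=torch.long)
    for _ in range(horizon):
        logits = decoder(build_sequence(prefix, state, actions))  # (1, L, n_bins)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)     # pick best bin
        actions = torch.cat([actions, next_tok], dim=1)
    return actions  # (1, horizon) discrete bins, later de-quantized to commands

class StubDecoder(torch.nn.Module):
    """Stand-in for a transformer decoder: token IDs -> per-position logits."""
    def __init__(self, vocab=1024, n_bins=256, d=64):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, d)
        self.out = torch.nn.Linear(d, n_bins)
    def forward(self, ids):
        return self.out(self.emb(ids))

prefix = torch.randint(0, 1024, (1, 20))  # vision + instruction tokens
state = torch.randint(0, 1024, (1, 4))    # discretized joint angles, gripper, etc.
print(decode_action(StubDecoder(), prefix, state).shape)  # torch.Size([1, 8])
```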

3.3 Learning Strategies

  • Internet-scale pretraining: e.g., LAION-5B, HowTo100M;
  • Robot trajectory data: e.g., RT-X, BridgeData;
  • Multi-stage training: align semantics first, then learn actions, and finally fine-tune on the target task (sketched below).
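A schematic sketch of such a staged schedule. The stage names, freezing choices, and the `fit` helper are assumptions for illustration; actual recipes vary from model to model:

```python
# Multi-stage training sketch: semantics first, actions second, task last.
def fit(model, dataset, objective, lr=1e-4):
    """Placeholder training loop; a real implementation would iterate
    batches, compute the named objective, and step an optimizer."""
    ...

def train_vla(model, web_data, robot_trajectories, task_demos):
    # Stage 1: vision-language alignment on internet-scale pairs (e.g., LAION-5B),
    # with the action head frozen so only semantics are learned.
    for p in model.policy_head.parameters():
        p.requires_grad = False
    fit(model, web_data, objective="contrastive_or_captioning")

    # Stage 2: action learning on robot trajectories (e.g., RT-X, BridgeData).
    for p in model.parameters():
        p.requires_grad = True
    fit(model, robot_trajectories, objective="behavior_cloning")

    # Stage 3: task-specific fine-tuning on a small demonstration set.
    fit(model, task_demos, objective="behavior_cloning", lr=1e-5)
```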

4. Summary of Representative Models

The paper lists more than 45 VLA models, grouped along the timeline into three categories:

| Category | Example Models | Characteristics |
| --- | --- | --- |
| Early fusion models | CLIPort, RT-1, Gato | Basic fusion, end-to-end control |
| Diffusion policy models | Diffusion Policy, Pi-0 | Multi-modal action generation, strong adaptability |
| Dual-system architectures | GR00T N1, HybridVLA | Separate high-level planning from low-level control, improving efficiency and safety |
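To make the diffusion-policy row above concrete, here is a generic DDPM-style sampling sketch over an action trajectory. The crude denoising step, stub network, and all dimensions are simplifying assumptions, not the algorithm of Diffusion Policy or Pi-0 specifically:

```python
# Diffusion-policy sampling sketch: start from Gaussian noise over a whole
# action sequence and iteratively denoise, conditioned on the observation.
import torch

def sample_actions(denoiser, obs, horizon=16, action_dim=7, n_steps=50):
    actions = torch.randn(1, horizon, action_dim)  # pure noise to start
    for t in reversed(range(n_steps)):
        t_tensor = torch.full((1,), t, dtype=torch.float32)
        eps_hat = denoiser(actions, t_tensor, obs)  # predict the injected noise
        actions = actions - eps_hat / n_steps       # crude Euler-style update
        if t > 0:
            actions = actions + 0.01 * torch.randn_like(actions)  # stochasticity
    return actions  # (1, horizon, action_dim) continuous trajectory

class StubDenoiser(torch.nn.Module):
    """Stand-in for the conditional noise-prediction network."""
    def __init__(self, action_dim=7, obs_dim=32):
        super().__init__()
        self.net = torch.nn.Linear(action_dim + obs_dim + 1, action_dim)
    def forward(self, actions, t, obs):
        B, H, _ = actions.shape
        cond = torch.cat([obs, t.unsqueeze(-1)], dim=-1)  # (B, obs_dim+1)
        cond = cond.unsqueeze(1).expand(B, H, -1)         # broadcast per step
        return self.net(torch.cat([actions, cond], dim=-1))

print(sample_actions(StubDenoiser(), torch.randn(1, 32)).shape)  # (1, 16, 7)
```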

5. Application Scenarios

5.1 Humanoid Robots

  • HelixRoboNurse-VLA,能执行复杂任务如开门、取物、手术辅助;
  • 强调语言指令理解 + 动态环境适应 + 安全控制

5.2 Autonomous Driving

  • OpenDriveVLA and ORION fuse vision and language instructions to generate driving behavior;
  • Emphasis: interpretability and closed-loop control.

5.3 Industrial Manufacturing

  • CogACT supports multi-step assembly and tool switching;
  • Emphasis: generalization and task compositionality.

5.4 Healthcare and Agriculture

  • RoboNurse-VLA and UAV-VLA support fine-grained manipulation and remote instruction execution;
  • Emphasis: high precision and human-robot collaboration.

5.5 Augmented Reality Navigation

  • AR interaction systems generate real-time navigation cues from vision and language;
  • Emphasis: real-time performance and personalized adaptation.

6. Challenges and Limitations

| Challenge Category | Specific Issues |
| --- | --- |
| Real-time inference | Autoregressive generation is slow and struggles to meet high-frequency control demands |
| Action representation | Discretized actions lack precision; diffusion models carry high computational cost |
| Safety | Models lack robustness in unseen environments, making physical safety hard to guarantee |
| Dataset bias | Web-scraped data carries biases that hurt generalization |
| System integration | High-dimensional vision is hard to align with low-dimensional control |
| Ethics and privacy | Models may leak private information and exacerbate social inequality |

7. Future Directions

7.1 Unified Foundation Models

  • Build “brain-level” multi-modal foundation models that unify perception, reasoning, and action.

7.2 Continual Learning and Adaptability

  • Introduce agentic AI so that models can keep learning and self-optimizing after deployment.

7.3 Neuro-Symbolic Planning

  • Combine symbolic reasoning with neural networks to improve task decomposition and interpretability.

7.4 World Models and Causal Reasoning

  • Predict future states to strengthen the model's understanding and control of the physical world (see the sketch below).
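A minimal sketch of the world-model idea: a learned latent transition function lets the agent roll out a candidate action plan in imagination before acting. The MLP dynamics and all dimensions are illustrative assumptions:

```python
# World-model sketch: predict next latent state from (state, action),
# enabling imagined rollouts for planning.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, state_dim=64, action_dim=7):
        super().__init__()
        self.transition = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def rollout(self, state, action_plan):
        """Imagine the trajectory produced by a candidate plan."""
        trajectory = []
        for action in action_plan:  # iterate the planned steps
            state = self.transition(torch.cat([state, action], dim=-1))
            trajectory.append(state)
        return torch.stack(trajectory)  # (T, B, state_dim)

wm = LatentDynamics()
plan = [torch.randn(1, 7) for _ in range(5)]
print(wm.rollout(torch.randn(1, 64), plan).shape)  # torch.Size([5, 1, 64])
```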

7.5 Efficient Deployment

  • Use model compression, quantization, and parallel decoding to enable edge deployment (a quantization example follows).
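As one concrete instance of this toolbox, PyTorch's built-in dynamic quantization stores linear-layer weights in int8 and dequantizes on the fly, shrinking memory and speeding up CPU/edge inference. This is a generic recipe, not one the survey prescribes, and the toy policy head is an assumption:

```python
# Dynamic int8 quantization sketch using PyTorch's built-in utility.
import torch
import torch.nn as nn

policy = nn.Sequential(  # stand-in for a trained VLA policy head
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7)
)
quantized = torch.quantization.quantize_dynamic(
    policy, {nn.Linear}, dtype=torch.qint8  # quantize only the Linear layers
)
out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 7]); same interface, smaller weights
```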

7.6 Safety and Ethical Alignment

  • Build auditable, interpretable VLA systems aligned with human values.

8. Summary and Contributions

  • The paper is the first survey to systematically organize VLA models, covering concepts, models, methods, applications, challenges, and future directions.
  • It proposes a five-pillar analysis framework: conceptual foundations, technical progress, application scenarios, challenges and solutions, and a future roadmap.
  • It argues that VLA models are a key path toward embodied intelligence and points out potential directions for reaching AGI.
