ACT
ACT (Action Chunking with Transformers) is a high-performance end-to-end action control model for robot learning scenarios. Compared with traditional modular robot control models, ACT adopts a lightweight Transformer architecture as its core backbone for action representation learning, combined with a multimodal perception fusion module and a temporal action optimization network, delivering significant improvements in both control accuracy and real-time response speed.
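The "action chunking" in the name refers to the policy predicting a short sequence (chunk) of future actions per inference call; overlapping chunks from successive calls are typically blended at execution time by temporal ensembling with exponentially decaying weights. A minimal sketch of that blending step follows; the chunk length, decay constant, and function names are illustrative assumptions, not taken from this repository:

```python
import numpy as np

CHUNK = 4      # actions predicted per policy call (hypothetical value)
ACT_DIM = 6    # matches the 1 x 6 state/action width on this card
M = 0.1        # exponential decay constant (hypothetical value)

def temporal_ensemble(all_preds, t):
    """Blend every chunk prediction that covers timestep t.

    all_preds: list of (start_step, chunk) pairs, where chunk is a
    (CHUNK, ACT_DIM) array predicted at start_step. Older predictions
    get weight exp(-M * age), then weights are normalized.
    """
    actions, weights = [], []
    for start, chunk in all_preds:
        offset = t - start
        if 0 <= offset < len(chunk):
            actions.append(chunk[offset])
            weights.append(np.exp(-M * offset))
    w = np.array(weights)
    w /= w.sum()
    return (np.stack(actions) * w[:, None]).sum(axis=0)
```

With this scheme the executed action at each step is a weighted average over all chunks still covering that step, which smooths out discontinuities between consecutive policy calls.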
Mirror Metadata
- Hugging Face repo: shadow-cann/hispark-modelzoo-act
- Portal model id: ivcifqkd0400
- Created at: 2026-03-03 10:30:33
- Updated at: 2026-03-04 16:06:22
- Category: Multimodal
Framework
- PyTorch
Supported OS
- openEuler
Computing Power
- Hi3403V100 SVP_NNN
Tags
- Embodied intelligence
Detail Parameters
- Input: 1 x 6; 1 x 3 x 240 x 320; 1 x 3 x 240 x 320
- Parameters: 87 M
- Compute: 8.02 GFLOPs
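The declared input signature can be reproduced as dummy tensors for a shape-level smoke test before feeding a real exported model. The interpretation of each input (a 6-dimensional joint/proprioceptive state plus two 3 x 240 x 320 RGB camera frames in CHW layout) is an assumption based on typical ACT setups, not stated on the card:

```python
import numpy as np

# Dummy batch matching the card's declared input signature.
# Assumption: 1 x 6 is the robot joint state, and the two
# 1 x 3 x 240 x 320 tensors are RGB frames from two cameras.
qpos  = np.zeros((1, 6), dtype=np.float32)
cam_a = np.zeros((1, 3, 240, 320), dtype=np.float32)
cam_b = np.zeros((1, 3, 240, 320), dtype=np.float32)

inputs = [qpos, cam_a, cam_b]
total_elems = sum(x.size for x in inputs)  # 6 + 2 * (3*240*320)
```

Feeding zero tensors of these exact shapes is a cheap way to verify that a converted model (e.g. the .om file above) accepts the documented input layout.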
Files In This Repo
- ACT.zip (source model / source model download; source model / source model metadata)
- act_distill_fp32_for_mindcmd_simp_release.om (compiled model / OM metadata / a16w8)
- SVP_NNN_PC_V1.0.6.0.tgz (additional resources / additional resources)
Upstream Links
- Portal card: https://gitbubble.github.io/hisilicon-developer-portal-mirror/model-detail.html?id=ivcifqkd0400
- Upstream repository: https://gitee.com/HiSpark/modelzoo/blob/master/samples/contribute/ACT/README.md
- License reference: https://github.com/tonyzhaozh/act/blob/main/LICENSE
Notes
- This repository was mirrored from the HiSilicon Developer Portal model card and local downloads captured on 2026-03-27.
- File ownership follows the portal card mapping, not just filename similarity.
- Cover image: 1731868158459906_____.png