This project showcases multiple demo pipelines built on Odin1. Each demo can be used independently or combined as an integrated stack.
Build 3D maps for fast, persistent relocalization, providing positioning anchors for navigation.
It may take some time to load the 3D model.
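For illustration, the rospy sketch below seeds the localizer with an initial pose on the prebuilt map, one common way a relocalization anchor is consumed downstream. The /initialpose topic and map frame are common ROS conventions assumed here, not necessarily this project's exact interface.

```python
# Minimal sketch: seed the localizer with an initial pose on the prebuilt map.
# The /initialpose topic and "map" frame are assumed ROS conventions.
import math

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def publish_initial_pose(x, y, yaw):
    pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"
    msg.pose.pose.position.x = x
    msg.pose.pose.position.y = y
    # Planar yaw expressed as a quaternion about the z axis
    msg.pose.pose.orientation.z = math.sin(yaw / 2.0)
    msg.pose.pose.orientation.w = math.cos(yaw / 2.0)
    pub.publish(msg)
    rospy.sleep(1.0)  # give the latched message time to reach subscribers

if __name__ == "__main__":
    rospy.init_node("relocalization_seed")
    publish_initial_pose(1.0, 2.0, 0.0)
```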
Two navigation stacks address different requirements across scenarios:
Detect objects and generate navigation goals using concise text or voice commands.
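The sketch below shows one way such a pipeline could be wired: a command string is matched against Ultralytics YOLO detections, and the first hit is published as a goal. The checkpoint, topic name, and the project_to_map helper are hypothetical placeholders; a real pipeline would fuse depth and TF to localize the object.

```python
# Minimal sketch: turn "go to the chair" plus a YOLO detection into a nav goal.
# Checkpoint, topic, and project_to_map() are hypothetical placeholders.
import rospy
from geometry_msgs.msg import PoseStamped
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def project_to_map(box):
    """Hypothetical stand-in: a real system fuses depth and TF to get a map pose."""
    return 1.0 + float(box.xywh[0][0]) / 1000.0, 0.0  # fake (x, y) in the map frame

def goal_from_command(image, command_text):
    results = model(image)[0]
    for box in results.boxes:
        if results.names[int(box.cls)] in command_text.lower():
            goal = PoseStamped()
            goal.header.frame_id = "map"
            goal.header.stamp = rospy.Time.now()
            goal.pose.position.x, goal.pose.position.y = project_to_map(box)
            goal.pose.orientation.w = 1.0
            return goal
    return None

if __name__ == "__main__":
    rospy.init_node("text_to_goal")
    pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
    goal = goal_from_command("scene.jpg", "go to the chair")
    if goal is not None:
        rospy.sleep(0.5)  # let the publisher connect before sending
        pub.publish(goal)
```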
Provides scene understanding and description via a vision-language model (VLM).
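As a rough sketch of what a VLM scene-description call can look like, the example below queries a Qwen2-VL checkpoint through a recent Hugging Face transformers release. The model ID, prompt, and decoding settings are assumptions for illustration, not necessarily what this demo ships with.

```python
# Minimal sketch: ask a Qwen2-VL checkpoint to describe a camera frame.
# Model ID and prompt are illustrative assumptions.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-2B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("scene.jpg")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this scene for a mobile robot."}]}]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens so only the generated description is decoded
print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```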
Enables online vision-language navigation for object tracking.
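A minimal sketch of such a loop is shown below: Ultralytics' built-in tracker keeps a target in view while a proportional controller steers toward its bounding-box center. The /cmd_vel topic, target class, gains, and image width are illustrative assumptions, not this project's actual controller.

```python
# Minimal sketch: steer toward a tracked object's bounding-box center.
# Topics, gains, target class, and image width are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist
from ultralytics import YOLO

def follow(target="person", img_width=640):
    model = YOLO("yolov8n.pt")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    # stream=True yields results frame by frame; persist=True keeps track IDs stable
    for result in model.track(source=0, stream=True, persist=True):
        cmd = Twist()  # zero velocities by default: stop if the target is lost
        for box in result.boxes:
            if result.names[int(box.cls)] == target:
                err = float(box.xywh[0][0]) / img_width - 0.5  # offset from image center
                cmd.angular.z = -1.5 * err  # proportional steering gain (assumed)
                cmd.linear.x = 0.3          # creep forward while the target is visible
                break
        pub.publish(cmd)
        if rospy.is_shutdown():
            break

if __name__ == "__main__":
    rospy.init_node("vln_follow_sketch")
    follow()
```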
We're actively expanding Odin1's capabilities with experimental modules, ranging from multi-floor navigation and human-aware motion planning to on-device large language model (LLM) agents for autonomous task reasoning. Stay tuned for updates!
Thanks to the excellent work behind ROS Navigation, NeuPAN, Ultralytics YOLO, and Qwen.
Special thanks to hanruihua, KevinLADLee, and bearswang for their technical support.
All code is released under the Apache License 2.0.