Odin1 On-Robot

Working with Odin1

This project showcases multiple demo pipelines built on Odin1. Each demo can be used independently or combined as an integrated stack.

SLAM

Builds 3D maps for fast, persistent relocalization and provides positioning anchors for navigation.

It may take some time to load the 3D model.
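
As a minimal illustration, the sketch below caches the latest localized pose as a positioning anchor for later navigation goals. It assumes a ROS 1 setup and a hypothetical pose topic name; substitute the topic your SLAM/relocalization module actually publishes.

    #!/usr/bin/env python3
    # Minimal sketch: cache the latest localized pose as a positioning anchor.
    # The topic name "/localization/pose" is a placeholder; use the pose topic
    # published by your SLAM/relocalization module.
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    latest_pose = None

    def pose_callback(msg):
        global latest_pose
        latest_pose = msg.pose.pose
        rospy.loginfo("anchor: x=%.2f y=%.2f",
                      latest_pose.position.x, latest_pose.position.y)

    if __name__ == "__main__":
        rospy.init_node("pose_anchor_listener")
        rospy.Subscriber("/localization/pose", PoseWithCovarianceStamped, pose_callback)
        rospy.spin()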

Navigation

Two navigation stacks are provided to cover different scenarios and requirements:

  • Modified ROS Navigation Stack (see the goal-sending sketch after this list)
  • End-to-End Local Planning
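
For the ROS-based stack, goals are typically sent through the standard move_base action interface. The sketch below is a minimal example rather than this project's own API; the action server name and map frame are assumptions to adapt to your configuration.

    #!/usr/bin/env python3
    # Minimal sketch: send one navigation goal to the standard move_base action
    # server of the ROS Navigation stack. Server and frame names are assumptions.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def send_goal(x, y):
        client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == "__main__":
        rospy.init_node("send_nav_goal")
        send_goal(2.0, 1.5)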

Object Navigation

Detects objects and generates navigation goals from concise text or voice commands.
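
A rough sketch of the detection half of such a pipeline is shown below, using Ultralytics YOLO to pick the detection named in a text command. Converting the pixel location into a map-frame goal needs depth and the camera-to-map transform, which are omitted; the weights file and image path are placeholders.

    # Minimal sketch: match a text command against YOLO detections.
    # "yolov8n.pt" and "scene.jpg" are placeholders; projecting the pixel
    # center to a map-frame navigation goal is omitted here.
    from ultralytics import YOLO

    def find_target(image_path, command):
        model = YOLO("yolov8n.pt")
        results = model(image_path)[0]          # single image -> single result
        for box in results.boxes:
            label = results.names[int(box.cls)]
            if label in command.lower():        # naive match, e.g. "go to the chair"
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                center = ((x1 + x2) / 2, (y1 + y2) / 2)
                print(f"target '{label}' at pixel {center}")
                return label, center
        return None

    if __name__ == "__main__":
        find_target("scene.jpg", "go to the chair")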

VLM Scene Understanding

Provides scene understanding and description via a vision-language model (VLM).
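
As an illustration only, the sketch below asks a Qwen vision-language model to describe a camera frame through an OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders for whatever serves your deployment (for example, a local vLLM server).

    # Minimal sketch: request a scene description from a Qwen VLM served behind
    # an OpenAI-compatible endpoint. base_url, api_key, and model are placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    with open("scene.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="Qwen/Qwen2-VL-7B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                {"type": "text", "text": "Describe the scene and list visible obstacles."},
            ],
        }],
    )
    print(response.choices[0].message.content)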

VLN

Enables online vision-language navigation (VLN) for object tracking.

More Exciting Features in Development

We're actively expanding Odin1's capabilities with experimental modules—ranging from multi-floor navigation and human-aware motion planning to on-device large language model (LLM) agents for autonomous task reasoning. Stay tuned for updates!


Acknowledgements

Thanks to the excellent work by ROS Navigation, NeuPAN, Ultralytics YOLO and Qwen.

Special thanks to hanruihua, KevinLADLee and bearswang for their technical support.

License

All code is released under the Apache License 2.0.