Physical AI
Where AI meets robotics.
Physical AI in our lab connects perception, language, and control into one deployable stack. We combine SmolVLA policies, open-vocabulary perception, and LeRobot infrastructure to move from data capture to real-robot execution with reproducible evaluation.
VLA Skill Execution
Operators define the task in plain language through our internal chat, and π0.5 and ACT policies convert that intent into 50-step action chunks executed through low-latency control loops on real hardware.
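The chunked-execution pattern above can be sketched as a control loop that queries the policy once per chunk and streams the resulting actions at the control rate. This is a minimal illustration with a stand-in zero-action policy, not our actual π0.5/ACT models:

```python
import numpy as np

CHUNK_LEN = 50  # action-chunk horizon, matching the 50-step chunks above


def dummy_policy(observation):
    """Stand-in for a learned VLA policy; returns a (CHUNK_LEN, dof) chunk.

    A real policy maps images + language + proprioception to actions;
    this stub just emits zero actions of the observation's dimension.
    """
    return np.zeros((CHUNK_LEN, observation.shape[-1]))


def run_episode(get_obs, send_action, n_steps):
    """Query the policy once per chunk, then stream actions until done."""
    executed = 0
    while executed < n_steps:
        chunk = dummy_policy(get_obs())          # one inference per chunk
        for action in chunk[: n_steps - executed]:
            send_action(action)                  # low-latency inner loop
            executed += 1
    return executed
```

Re-planning once per chunk (rather than per step) is what keeps the inner loop fast enough for real hardware.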
Open-Vocabulary Perception
GroundingDINO and SAM provide open-vocabulary object grounding and mask-guided ROIs, so the system understands scene context, resolves object references, and links language to the correct target.
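Once a segmentation mask is available, deriving a mask-guided ROI is a simple geometric step. A minimal NumPy sketch (the mask here is any binary array, e.g. one produced by SAM; the padding value is illustrative):

```python
import numpy as np


def mask_to_roi(mask, pad=4):
    """Tight bounding box around a binary mask, padded and clipped
    to the image bounds. Returns (y0, x0, y1, x1) or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # empty mask: nothing was segmented
    y0 = max(int(ys.min()) - pad, 0)
    x0 = max(int(xs.min()) - pad, 0)
    y1 = min(int(ys.max()) + pad + 1, mask.shape[0])
    x1 = min(int(xs.max()) + pad + 1, mask.shape[1])
    return y0, x0, y1, x1
```

The resulting ROI can then crop the camera frame so downstream policies attend only to the referenced object.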
Dataset-to-Robot Pipeline
Leader-follower teleoperation capture with dual cameras (wrist + top), LeRobot fine-tuning workflows, and checkpoint validation on SO-101 setups make deployment reproducible.
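The capture side of this pipeline records four synchronized streams per timestep. A minimal sketch of such an episode record (field names are illustrative and not the exact LeRobot dataset schema):

```python
from dataclasses import dataclass, field


@dataclass
class Episode:
    """One teleoperation episode: synchronized dual-camera frames plus
    leader (commanded) and follower (measured) joint positions."""
    wrist_frames: list = field(default_factory=list)
    top_frames: list = field(default_factory=list)
    leader_q: list = field(default_factory=list)
    follower_q: list = field(default_factory=list)

    def add_step(self, wrist, top, leader, follower):
        """Append one synchronized timestep from all four streams."""
        self.wrist_frames.append(wrist)
        self.top_frames.append(top)
        self.leader_q.append(leader)
        self.follower_q.append(follower)

    def __len__(self):
        return len(self.follower_q)
```

Keeping leader and follower states side by side is what lets fine-tuning learn the mapping from commanded to executed motion.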
Execution and Iteration Loop
The robot executes the commanded motion, we monitor results online, and the same chat-guided workflow can refine prompts and behavior in real time without restarting the pipeline.

HexaKinectics
Precision Motion. Human-Level Dexterity. Six-Axis Control.

HexaKinectics (formerly HexaRehab)
The team designed and assembled this Stewart platform as their initial public venture. Development progressed from MATLAB/Simulink prototyping to an open-source ROS2 interface with object-following capabilities powered by NVIDIA technology for real-time performance.
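A Stewart platform's six-axis control rests on a closed-form inverse kinematics: each actuator length is the norm of the vector from its base anchor to its (rotated and translated) platform anchor, l_i = ‖t + R·p_i − b_i‖. A minimal NumPy sketch with illustrative geometry (not the real HexaKinectics dimensions):

```python
import numpy as np


def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from roll (x), pitch (y), yaw (z), ZYX order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx


def stewart_leg_lengths(base_pts, plat_pts, translation, rpy):
    """Inverse kinematics: length of each of the six legs,
    l_i = || t + R p_i - b_i ||."""
    R = rot_zyx(*rpy)
    legs = translation + (R @ plat_pts.T).T - base_pts
    return np.linalg.norm(legs, axis=1)


# Illustrative geometry: base anchors at radius 1.0, platform at 0.5.
angles = np.deg2rad(np.arange(0, 360, 60))
base_pts = np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)
plat_pts = 0.5 * base_pts

# Neutral pose: platform level, 1.0 above the base.
lengths = stewart_leg_lengths(
    base_pts, plat_pts, np.array([0.0, 0.0, 1.0]), (0.0, 0.0, 0.0)
)
```

Because the inverse problem is closed-form (unlike serial arms), the platform can servo all six legs at high rates, which is what enables the precision figures below.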
High Dexterity
Optical lens calibration, micro-assembly, sensor alignment, fine manipulation tasks
Precise Movements
Sub-millimeter positioning, vibration testing, trajectory replay, force-controlled motions
Motion Systems
Six-axis motion simulation, teleoperation rigs, control algorithm validation platforms
Winner of “Tu Idea Cuenta 2025” — Industrial Category, Fundación Michelin