A three‑stage exploration, in embodied agents, of:
- Learning (Reinforcement Learning)
- Perception (e.g., SLAM, LiDAR)
- Inference (Active Inference)
Resources:
- Navigation and Exploration with Active Inference: from Biology to Industry
- Tutorial 1: Active inference from scratch
- A step-by-step tutorial on active inference and its application to empirical data
- How Active Inference Could Help Revolutionise Robotics
Embodied Intelligence Stack (EIS) is a three‑level exploration of how intelligent agents learn, perceive, and infer from first principles.
- The project begins with foundational reinforcement learning, building core concepts and algorithms from scratch.
- Level 2 extends these foundations into robotics, SLAM, and perception, integrating deep RL to enable agents that act in complex, embodied environments.
- Level 3 culminates in a first‑principles implementation of Bayesian Active Inference, unifying learning, action, and belief‑updating into a coherent cognitive architecture.
EIS is designed as a modular, reproducible research framework that bridges theory, engineering, and embodied intelligence.
EIS reflects my long‑term research vision to build intelligence from first principles through mechanisms that are mathematically grounded, transparent, and embodied. I am interested in agents whose behavior emerges from the integration of learning, perception, and probabilistic reasoning, not from black‑box optimization alone.
By progressing from foundational RL to robotics and Bayesian Active Inference, EIS expresses a unified view of intelligent behavior: agents should learn from experience, act in the world, and update their beliefs in a principled way. This project is both a technical framework and a statement of research identity aimed at graduate‑level study, robotics labs, and industry teams working on embodied or neuromorphic AI.
Level 1 builds the mathematical and algorithmic foundations of RL from scratch.
Focus areas include:
- Value‑based and policy‑based methods
- Exploration strategies
- Tabular and function‑approximation settings
- Simple agent‑environment loops
- Clear, transparent implementations
This level establishes the core learning mechanisms that later stages build upon.
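As a flavor of what "clear, transparent implementations" means at this level, here is a minimal tabular Q‑learning sketch on a toy 5‑state corridor. The environment, constants, and function names are illustrative, not part of EIS.

```python
import random

N_STATES, ACTIONS = 5, (0, 1)      # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # large epsilon suits this toy problem

def step(state, action):
    """Environment transition: move left/right, reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the greedy next-state value
            target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy should prefer moving right in every non-terminal state.
policy = [max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```

The same agent‑environment loop generalizes directly to richer environments and to the function‑approximation setting.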
Level 2 extends Level 1 into embodied settings.
Focus areas include:
- Robot kinematics and control
- SLAM fundamentals
- Visual and sensor‑based perception
- Deep RL for continuous control
- Integration of learning with real‑world constraints
This level connects abstract RL principles to physical agents and perceptual pipelines.
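As one concrete example of the kinematics side of this level, the forward kinematics of a 2‑link planar arm can be written in a few lines. Link lengths and names here are illustrative assumptions, not EIS code.

```python
import math

L1, L2 = 1.0, 0.5  # illustrative link lengths (meters)

def forward_kinematics(theta1, theta2):
    """End-effector (x, y) of a 2-link planar arm; joint angles in radians."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

x, y = forward_kinematics(0.0, math.pi / 2)
# With theta1 = 0 and theta2 = 90 deg, the first link lies along x
# and the second points along y, so the end effector sits at (1.0, 0.5).
print(round(x, 6), round(y, 6))
```

Inverting this map (inverse kinematics) and tracking the end effector under noisy sensing are where the control and perception topics above come together.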
Level 3 is a principled exploration of inference‑driven behavior.
Focus areas include:
- Generative models and variational inference
- Perception‑action loops
- Free‑energy minimization
- Planning as inference
- Integration with robotics and perception
This level unifies learning, belief‑updating, and action selection into a single cognitive architecture.
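The free‑energy idea can be illustrated with a minimal discrete example: for a two‑state generative model, variational free energy upper‑bounds negative log evidence and is minimized exactly at the Bayesian posterior. The prior and likelihood numbers below are made up for illustration.

```python
import math

prior = [0.5, 0.5]       # p(s): prior over two hidden states
likelihood = [0.9, 0.2]  # p(o | s) for the single observation received

def free_energy(q):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s]))
               for s in range(2) if q[s] > 0)

# Exact posterior by Bayes' rule
evidence = sum(prior[s] * likelihood[s] for s in range(2))
posterior = [prior[s] * likelihood[s] / evidence for s in range(2)]

# At the exact posterior, F equals -ln p(o) (negative log evidence);
# any other belief q gives a strictly larger F.
print(round(free_energy(posterior), 6), round(free_energy([0.5, 0.5]), 6))
```

Perception, in this framing, is descent on F with respect to beliefs; action selection extends the same principle to expected free energy over policies.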
- Level 1: In progress
- Level 2: Planned
- Level 3: Scheduled for delivery by March 15