A three‑stage exploration of embodied intelligence. EIS builds from first‑principles RL to robotics and perception, and on to Bayesian active inference. Designed for reproducible research in learning, action, and inference.

SrEntropy/Embodied-Intelligence-Stack-EIS


Embodied Intelligence Stack (EIS)

A three‑stage exploration of embodied agents through:

  • Learning (Reinforcement Learning)
  • Perception (e.g., SLAM, LiDAR)
  • Inference (Active Inference)


Project Overview

Embodied Intelligence Stack (EIS) is a three‑level exploration of how intelligent agents learn, perceive, and infer from first principles.

  • Level 1 begins with foundational reinforcement learning, building core concepts and algorithms from scratch.
  • Level 2 extends these foundations into robotics, SLAM, and perception, integrating deep RL to enable agents that act in complex, embodied environments.
  • Level 3 culminates in a first‑principles implementation of Bayesian Active Inference, unifying learning, action, and belief‑updating into a coherent cognitive architecture.

EIS is designed as a modular, reproducible research framework that bridges theory, engineering, and embodied intelligence.

Vision

EIS reflects my long‑term research vision to build intelligence from first principles through mechanisms that are mathematically grounded, transparent, and embodied. I am interested in agents whose behavior emerges from the integration of learning, perception, and probabilistic reasoning, not from black‑box optimization alone.

By progressing from foundational RL to robotics and Bayesian Active Inference, EIS expresses a unified view of intelligent behavior: agents should learn from experience, act in the world, and update their beliefs in a principled way. This project is both a technical framework and a statement of research identity aimed at graduate‑level study, robotics labs, and industry teams working on embodied or neuromorphic AI.


Project Structure

Level 1: First‑Principles Reinforcement Learning

Builds the mathematical and algorithmic foundations of RL from scratch.

Focus areas include:

  • Value‑based and policy‑based methods
  • Exploration strategies
  • Tabular and function‑approximation settings
  • Simple agent‑environment loops
  • Clear, transparent implementations

This level establishes the core learning mechanisms that later stages build upon.
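As a sketch of the kind of transparent, from‑scratch implementation Level 1 aims at, here is a minimal tabular Q‑learning agent on a hypothetical toy chain environment (this example is illustrative only; the environment, function names, and hyperparameters are assumptions, not code from this repository):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1,
                     gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0,
    reward 1.0 on reaching the last state. Actions: 0 = left, 1 = right."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning_chain()
```

After training, the greedy policy moves right from every non‑terminal state, i.e. `Q[s][1] > Q[s][0]` for all `s` before the goal. The tabular setting keeps the learning mechanism fully inspectable before function approximation is introduced.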


Level 2: Robotics, SLAM, and Perception (Deep RL)

Extends Level 1 into embodied settings.

Focus areas include:

  • Robot kinematics and control
  • SLAM fundamentals
  • Visual and sensor‑based perception
  • Deep RL for continuous control
  • Integration of learning with real‑world constraints

This level connects abstract RL principles to physical agents and perceptual pipelines.
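As one small example of the kinematics fundamentals in this level, the following is a hedged sketch of a differential‑drive (unicycle) model integrated with an Euler step; the function name, state convention, and parameters are illustrative assumptions, not this project's API:

```python
import math

def diff_drive_step(x, y, theta, v, omega, dt):
    """One Euler-integration step of unicycle/differential-drive kinematics.
    x, y: position (m); theta: heading (rad);
    v: linear velocity (m/s); omega: angular velocity (rad/s)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive straight along +x for 1 s at 1 m/s: the robot ends near (1, 0)
# with heading unchanged.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = diff_drive_step(x, y, th, v=1.0, omega=0.0, dt=0.01)
```

The same state‑update structure is what a SLAM motion model predicts between sensor updates, which is where this level's perception components connect to control.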


Level 3: Bayesian Active Inference (First Principles)

A principled exploration of inference‑driven behavior.

Focus areas include:

  • Generative models and variational inference
  • Perception‑action loops
  • Free‑energy minimization
  • Planning as inference
  • Integration with robotics and perception

This level unifies learning, belief‑updating, and action selection into a single cognitive architecture.
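To make the free‑energy idea concrete, here is a minimal sketch of variational free energy for a two‑state discrete generative model; the model, numbers, and function names are illustrative assumptions, not this project's implementation:

```python
import math

def free_energy(q, prior, likelihood_o):
    """Variational free energy F = sum_s q(s) * [ln q(s) - ln p(o, s)],
    with p(o, s) = p(o | s) * p(s). F upper-bounds surprise -ln p(o)."""
    return sum(qs * (math.log(qs) - math.log(lo * ps))
               for qs, ps, lo in zip(q, prior, likelihood_o) if qs > 0)

prior = [0.5, 0.5]          # p(s): prior over two hidden states
likelihood_o = [0.9, 0.2]   # p(o | s): likelihood of the observation under each state

# Exact Bayesian posterior p(s | o) via Bayes' rule
evidence = sum(l * p for l, p in zip(likelihood_o, prior))
posterior = [l * p / evidence for l, p in zip(likelihood_o, prior)]

surprise = -math.log(evidence)
# At the exact posterior, F equals the surprise -ln p(o);
# any other belief q (e.g. the uniform one) gives a strictly larger F.
```

Minimizing `F` over beliefs recovers Bayesian perception, and in active inference the same quantity (expected over actions) drives action selection, which is the unification this level works out from first principles.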


Status

  • Level 1: In progress
  • Level 2: Planned
  • Level 3: Scheduled for delivery by March 15
