Autonomous Human-Following Robot using Machine Learning

Giovanni Dal Lago

Project Timeline

Sep 2025 - Dec 2025

OVERVIEW

This project implements a vision-based autonomous control system for a mobile robot using both deterministic and neural-network policies. A DepthAI camera provides person detections, which are used to center the target in the image and maintain a desired distance. An Extended Kalman Filter fuses onboard sensors for state estimation, while an occupancy grid map supports environmental awareness and obstacle handling. The result is a compact, fully integrated ROS2 pipeline capable of real-time human tracking and autonomous motion control. (Demo in attachments section)
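As a rough illustration of the sensor-fusion step mentioned above, the sketch below shows a minimal EKF predict/update cycle for a planar robot pose, assuming a unicycle motion model driven by wheel odometry and corrected by an IMU yaw reading. The state layout and noise values are illustrative assumptions, not the filter actually running on the robot.

```python
import numpy as np

# Minimal EKF sketch for a planar pose [x, y, theta].
# Prediction uses a unicycle model driven by odometry (v, w);
# the update corrects heading with an IMU yaw measurement.
# All noise values are illustrative placeholders.

class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)                    # state: x, y, theta
        self.P = np.eye(3) * 0.1                # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005])   # process noise
        self.R = np.array([[0.02]])             # IMU yaw noise

    def predict(self, v, w, dt):
        th = self.x[2]
        # Nonlinear motion model f(x, u)
        self.x += np.array([v * np.cos(th) * dt,
                            v * np.sin(th) * dt,
                            w * dt])
        # Jacobian of f with respect to the state
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_yaw(self, yaw_meas):
        H = np.array([[0.0, 0.0, 1.0]])              # measurement picks theta
        y = np.array([yaw_meas - self.x[2]])         # innovation
        y[0] = (y[0] + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)          # Kalman gain
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```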

HIGHLIGHTS

  • Implemented vision-based person detection using DepthAI and class filtering.
  • Designed both neural-network policy control and deterministic control for person-following.
  • Integrated an Extended Kalman Filter for state estimation.
  • Built and updated an occupancy grid map for navigation and obstacle handling (see the mapping sketch after this list).
  • Developed ROS2 nodes for control, tracking, mapping, and motor command execution.
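As a rough companion to the mapping highlight above, the sketch below applies the standard log-odds update along a range ray: cells between the sensor and the hit point are marked free, the endpoint cell occupied. Grid size, resolution, and the log-odds increments are assumed values, not the robot's actual parameters.

```python
import numpy as np

# Log-odds occupancy grid sketch. Resolution and increment
# values are illustrative assumptions.

L_FREE, L_OCC = -0.4, 0.85     # log-odds increments
GRID, RES = 200, 0.05          # 200x200 cells at 5 cm

grid = np.zeros((GRID, GRID))  # log-odds, 0 = unknown

def to_cell(x, y):
    return int(x / RES) + GRID // 2, int(y / RES) + GRID // 2

def update_ray(x0, y0, x1, y1):
    """Update cells from sensor (x0, y0) to hit point (x1, y1)."""
    n = int(np.hypot(x1 - x0, y1 - y0) / RES) + 1
    for t in np.linspace(0.0, 1.0, n):
        i, j = to_cell(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        if 0 <= i < GRID and 0 <= j < GRID:
            grid[i, j] += L_FREE
    i, j = to_cell(x1, y1)
    if 0 <= i < GRID and 0 <= j < GRID:
        # endpoint got L_FREE in the loop; net it out to L_OCC
        grid[i, j] += L_OCC - L_FREE

def occupancy_prob():
    return 1.0 - 1.0 / (1.0 + np.exp(grid))  # log-odds -> probability
```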

SKILLS

ROS2 · Python · C++ · Ubuntu · PyTorch · Scikit-learn

SUPPORTING MATERIALS

Additional Details

For the complete workspace, check out my GitHub.


The project implements a complete vision-based human-following system on a GoBilda mobile robot using a combination of classical control, imitation learning, and ROS2 integration. A DepthAI camera provides person detections through a MobileNet-based model, from which the system extracts two key signals: the horizontal bounding-box center c_x and the bounding-box height s, used as a proxy for distance. A custom ROS2 logging node records synchronized detection data and teleoperation commands at 10 Hz, producing a dataset that is later cleaned, normalized, and augmented with temporal history.
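A minimal sketch of such a logging node is shown below, assuming hypothetical topic names (/detections, /cmd_vel), vision_msgs detection messages, and a simple CSV sink; the real node's topics, message types, and class filtering may differ.

```python
import csv
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from vision_msgs.msg import Detection2DArray

# Sketch of a 10 Hz logging node: it caches the latest person
# detection and teleop command, then writes a synchronized row
# on a timer. Topic names and message types are assumptions.

class DetectionLogger(Node):
    def __init__(self):
        super().__init__('detection_logger')
        self.cx, self.s = None, None     # bbox center x, bbox height
        self.v, self.w = 0.0, 0.0        # latest teleop command
        self.writer = csv.writer(open('following_log.csv', 'w', newline=''))
        self.writer.writerow(['t', 'cx', 's', 'v', 'w'])
        self.create_subscription(Detection2DArray, '/detections', self.on_det, 10)
        self.create_subscription(Twist, '/cmd_vel', self.on_cmd, 10)
        self.create_timer(0.1, self.on_tick)  # 10 Hz

    def on_det(self, msg):
        if msg.detections:   # class filtering elided for brevity
            box = msg.detections[0].bbox
            # bbox field layout varies across vision_msgs versions
            self.cx, self.s = box.center.position.x, box.size_y

    def on_cmd(self, msg):
        self.v, self.w = msg.linear.x, msg.angular.z

    def on_tick(self):
        if self.cx is not None:
            t = self.get_clock().now().nanoseconds * 1e-9
            self.writer.writerow([t, self.cx, self.s, self.v, self.w])

def main():
    rclpy.init()
    rclpy.spin(DetectionLogger())
```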

Data preprocessing transforms raw pixel measurements into normalized control errors and constructs a six-dimensional feature vector representing three consecutive timesteps. Teleoperation commands are discretized into four action classes, framing human following as a classification task. Two MLP models are trained and evaluated; a deeper 64–32 architecture is selected based on improved separation between forward and turning actions and a higher overall accuracy.
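The sketch below illustrates this framing under assumed details: errors normalized to [-1, 1], a stack of three (center error, size error) pairs as the six-dimensional input, four discrete actions, and a 64-32 MLP in PyTorch. The normalization constants and class boundaries are placeholders, not the values used in the project.

```python
import torch
import torch.nn as nn

# Illustrative preprocessing + model matching the description:
# 6-D input = (center error, size error) over 3 timesteps,
# 4 discrete action classes, and a 64-32 hidden MLP.
# Normalization constants and thresholds are assumed values.

IMG_W, S_REF = 640.0, 200.0   # image width (px), target bbox height (px)

def make_features(history):
    """history: list of 3 (cx, s) tuples, oldest first -> 6-D tensor."""
    feats = []
    for cx, s in history:
        feats += [(cx - IMG_W / 2) / (IMG_W / 2),  # centering error in [-1, 1]
                  (S_REF - s) / S_REF]             # distance-proxy error
    return torch.tensor(feats, dtype=torch.float32)

def discretize(v, w):
    """Map a teleop command to one of 4 classes (assumed scheme)."""
    if abs(w) > 0.2:
        return 1 if w > 0 else 2       # turn left / turn right
    return 0 if v > 0.05 else 3        # forward / stop

policy = nn.Sequential(                # 64-32 MLP classifier
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 4),
)

x = make_features([(300, 180), (310, 190), (330, 195)])
action = policy(x).argmax().item()     # predicted action class
```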

A deterministic proportional controller is also developed as a baseline for comparison. Both the learned policy and the deterministic controller are deployed as ROS2 nodes, subscribing to detections, computing control actions, and publishing velocity commands in real time. In indoor corridor tests, the two controllers exhibit remarkably similar behavior, confirming the effectiveness of the imitation-learning pipeline. Outdoor experiments further validate robustness under challenging lighting and background conditions.
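A minimal sketch of such a proportional baseline might look like the following: turn to center the person in the image, drive forward or back to hold a target bounding-box height. The gains, speed limits, and topic names are assumed values, not the parameters of the deployed node.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

# Proportional-controller sketch: angular velocity from the
# centering error, linear velocity from the bbox-height error.
# Gains, limits, and topic names are illustrative assumptions.

K_ANG, K_LIN = 1.2, 0.8
IMG_W, S_REF = 640.0, 200.0

class PFollower(Node):
    def __init__(self):
        super().__init__('p_follower')
        # assume detections arrive as [cx, s] on a simple topic
        self.create_subscription(Float32MultiArray, '/person_signal',
                                 self.on_det, 10)
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_det(self, msg):
        cx, s = msg.data
        cmd = Twist()
        # turn toward the person: normalized centering error
        cmd.angular.z = -K_ANG * (cx - IMG_W / 2) / (IMG_W / 2)
        # hold distance: normalized bbox-height (distance-proxy) error
        cmd.linear.x = K_LIN * (S_REF - s) / S_REF
        cmd.linear.x = max(-0.3, min(0.3, cmd.linear.x))  # clamp speed
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(PFollower())
```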

Overall, the project delivers a fully operational human-following pipeline, covering perception, data collection, machine-learning-based policy learning, baseline control, and real-time robotic deployment.
