This project demonstrates the design and implementation of an autonomous robotic system capable of navigating complex environments, identifying objects through sensor fusion, and executing precise manipulation tasks. The robot integrates mechanical design, sensor processing, and multi-threaded control architecture to autonomously traverse a maze, locate and identify colored blocks, and relocate them to corresponding destinations.
Technical Approach:
The system uses a Raspberry Pi as the central controller, coordinating an Adafruit VL53L4CD Time-of-Flight (ToF) distance sensor for spatial awareness, an APDS-9960 RGB color sensor for object identification, TETRIX DC motors for differential-drive locomotion, and servo-actuated mechanisms for object manipulation. The drive base employs two parallel-mounted TETRIX motors, each powering an independent wheel, to maximize available torque, with an omni-wheel mounted at the rear to provide low-friction turning and balanced weight distribution. A servo-mounted ToF sensor provides the equivalent of three fixed sensors by rotating to measure distances in multiple directions, allowing the robot to detect front and side walls at a 30-centimeter threshold.
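The servo-swept scan described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the servo angles, helper names, and injected `set_servo_angle`/`read_distance_cm` callables are assumptions standing in for the real servo and VL53L4CD driver calls.

```python
# Hypothetical servo angles for the three scan headings (assumed values).
SCAN_ANGLES = {"left": 180, "front": 90, "right": 0}

def scan_walls(set_servo_angle, read_distance_cm, threshold_cm=30):
    """Rotate the ToF sensor to each heading and report wall presence.

    set_servo_angle and read_distance_cm are injected so the sketch stays
    hardware-independent; on the robot they would wrap the servo driver
    and the Adafruit VL53L4CD reading.
    """
    walls = {}
    for direction, angle in SCAN_ANGLES.items():
        set_servo_angle(angle)            # point the sensor at this heading
        distance = read_distance_cm()     # take one ToF measurement
        walls[direction] = distance < threshold_cm
    return walls
```

One servo and one sensor thus replace three fixed sensors at the cost of the sweep time per scan.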
Control Implementation:
The software architecture implements Python-based threading to enable concurrent execution of sensor processing and motion control, allowing continuous ToF distance monitoring without blocking the main control loop. The navigation algorithm employs closed-loop feedback control, continuously comparing sensor data against setpoint values and adjusting motor inputs to minimize positional error and maintain proper maze traversal. Wall-detection logic processes ToF measurements to determine when distance readings fall below the 30 cm threshold, triggering autonomous decision-making for turns and alignment maneuvers. The robot performs wall-alignment sequences by reversing into detected side walls to correct its heading before continuing forward navigation.
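The threading pattern described above, a background thread that keeps the latest ToF reading available while the control loop polls it against the threshold, can be sketched as below. The class name, update period, and injected `read_distance_cm` callable are illustrative assumptions, not the project's actual implementation.

```python
import threading
import time

class ToFMonitor:
    """Background thread that continuously refreshes the latest ToF
    reading so the main control loop never blocks on the sensor."""

    def __init__(self, read_distance_cm, period_s=0.02):
        self._read = read_distance_cm      # injected sensor read function
        self._period = period_s            # assumed polling period
        self.latest_cm = float("inf")      # no wall seen yet
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self.latest_cm = self._read()  # refresh shared reading
            time.sleep(self._period)

    def stop(self):
        self._stop.set()
        self._thread.join()

def wall_ahead(monitor, threshold_cm=30):
    """Threshold check used by the control loop to trigger a turn."""
    return monitor.latest_cm < threshold_cm
```

The control loop would call `wall_ahead(monitor)` each iteration and branch into a turn or alignment maneuver when it returns true.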
System Integration & Object Manipulation:
Color identification uses a voting algorithm: the RGB sensor is sampled five times, each sample is classified by its dominant channel (the maximum of the red, green, and blue values), and the most common classification across the samples is selected as the detected color. A scissor-arm mechanism with an integrated grabbing claw secures blocks upon detection, controlled by servo motors for jaw actuation and a REV Robotics 40:1 motor for arm angle adjustment. The pulley system uses a TETRIX motor and string-tensioned linkages threaded through each scissor-arm joint to extend the mechanism vertically, though full vertical extension was not achieved in the final implementation due to time constraints. The robot successfully demonstrated autonomous maze navigation with multiple turns, block detection and acquisition, color identification, and transportation to the final destination.
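The sample-and-vote scheme above can be sketched as follows. The function names and the injected `read_rgb` callable are assumptions; on the robot, `read_rgb` would wrap the APDS-9960 driver's color read.

```python
from collections import Counter

def classify_sample(r, g, b):
    """The dominant channel (largest of R, G, B) names one sample's color."""
    return max((("red", r), ("green", g), ("blue", b)), key=lambda kv: kv[1])[0]

def detect_color(read_rgb, samples=5):
    """Sample the RGB sensor several times and return the majority vote."""
    votes = Counter(classify_sample(*read_rgb()) for _ in range(samples))
    return votes.most_common(1)[0][0]
```

Voting over five samples makes the result robust to a single noisy reading near a color boundary.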