
Color Detecting Pick and Place Robot

Khang Diep

OVERVIEW

Over the course of 8 weeks, my teammates and I programmed a 4-degree-of-freedom robot arm and a camera using MATLAB to detect different colored balls, pick them up, and deposit them in their corresponding bins.

Additional Details

Over those 8 weeks, my team and I got to know the OpenManipulator-X robot arm used in our class quite well. Four DYNAMIXEL joint motors control the four degrees of freedom of the arm, with a servo gripper acting as the end effector. The arm can control the x, y, z, and pitch of the end effector, creating a relatively dexterous workspace.

The first lab of RBE 3001 got us accustomed to the architecture and design of the OpenManipulator-X. We explored the MATLAB code base and GitHub repository, with the main objective of adding three helper methods for servo control and data collection. We also did our first data collection and plotting, which served as great preparation for the rest of the plotting required in this class.

Lab 2 had us solve the forward kinematics of the OpenManipulator-X using the Denavit-Hartenberg convention. We wrote two methods: one translating a single DH row into its homogeneous transformation matrix, and another composing an entire DH table into a single base-to-end-effector transform. Using these two methods, we created a symbolic forward kinematics function, which we then optimized by exporting it as a concrete function in a saved file.

In Lab 3, we implemented inverse kinematics along with trajectory planning. We used a geometric approach for the IK, solving the problem on paper and then transcribing the derivation into a MATLAB function. The trajectory functions are relatively simple: known boundary-condition equations for cubic and quintic trajectories yield the polynomial coefficients, and an evaluation function solves multiple trajectories at each timestep.

The fourth lab had us formulate Jacobian matrices and use them to implement differential kinematics. We also used the Jacobian to detect singularities, since its determinant goes to zero at a singularity. The differential kinematics were used to track end-effector velocities as well as to build numerical inverse kinematics.

The fifth and final lab combined all of the previous labs to complete the end goal of sorting balls by color. To do this, we needed to access the USB camera and process its video feed to determine the colors and locations of the balls on the checkerboard.
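Our lab code was written in MATLAB; as a minimal sketch of the Lab 2 step, the standard DH row-to-transform computation and the composition of a full table might look like this in NumPy (the function names `dh_to_matrix` and `dh_table_to_fk` are illustrative, not the names from our repository):

```python
import numpy as np

def dh_to_matrix(theta, d, a, alpha):
    """Homogeneous transform for one standard DH row [theta, d, a, alpha]."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def dh_table_to_fk(table):
    """Compose every row of a DH table into the base-to-end-effector transform."""
    T = np.eye(4)
    for row in table:
        T = T @ dh_to_matrix(*row)
    return T
```

For a quick sanity check, two unit links stretched along x place the end effector at (2, 0, 0), and rotating the first joint by 90° moves it to (0, 2, 0).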
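The cubic-trajectory solve from Lab 3 reduces to a small linear system: four boundary conditions (position and velocity at the start and end times) determine the four polynomial coefficients. A NumPy sketch of that idea, with hypothetical helper names:

```python
import numpy as np

def cubic_coeffs(t0, tf, q0, qf, v0=0.0, vf=0.0):
    """Solve M a = b for a cubic q(t) = a0 + a1*t + a2*t^2 + a3*t^3
    satisfying q(t0)=q0, q'(t0)=v0, q(tf)=qf, q'(tf)=vf."""
    M = np.array([
        [1.0, t0, t0**2,      t0**3],
        [0.0, 1.0, 2 * t0, 3 * t0**2],
        [1.0, tf, tf**2,      tf**3],
        [0.0, 1.0, 2 * tf, 3 * tf**2],
    ])
    b = np.array([q0, v0, qf, vf])
    return np.linalg.solve(M, b)  # [a0, a1, a2, a3]

def eval_traj(coeffs, t):
    """Evaluate the polynomial at time t (polyval wants highest power first)."""
    return np.polyval(coeffs[::-1], t)
```

A quintic trajectory follows the same pattern with a 6x6 matrix, adding acceleration boundary conditions.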
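The singularity check from Lab 4 can be illustrated with a 2-link planar arm standing in for the 4-DOF OpenManipulator-X (the real Jacobian is larger, but the determinant test is the same idea); link lengths and the threshold here are made-up values:

```python
import numpy as np

def planar_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a 2-link planar arm (illustrative stand-in)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

def near_singularity(J, eps=1e-3):
    """Flag configurations where |det J| collapses toward zero."""
    return bool(abs(np.linalg.det(J)) < eps)
```

For this arm, det J = l1*l2*sin(q2), so the fully stretched pose (q2 = 0) is singular while a bent elbow is not.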

In conclusion, our robot was a great success. We implemented forward and inverse kinematics in tandem to control the robot's end effector, and we used the camera to take snapshots, filter the images, and detect the centroids of the balls placed on the checkerboard. By combining these two, we effectively and efficiently picked up each ball and dropped it in its corresponding bin. To go beyond the scope of the assignment, we implemented numerical inverse kinematics and a spline trajectory to create smooth, fast movements.

Completing this final project let us directly apply the concepts we had been learning in class over the past 8 weeks. We learned how to create DH tables and use them to find the end effector's position, and, conversely, how to use the robot's geometry to calculate each joint value given a desired end-effector position. We also learned to relate the robot's joint velocities to its XYZ task-space velocities through the Jacobian matrix, which we later used to develop our numerical inverse kinematics and to detect singularities. This process taught us the complexity of inverse kinematics for robots with more degrees of freedom, and why numerical inverse kinematics is typically used over analytical inverse kinematics. To turn the images of the checkerboard into usable coordinates, we learned about frame transformations taking a point from the camera frame to the checkerboard frame, and finally to the robot's frame. As part of this image-processing work, we applied masks to isolate the components of the image we wanted and discard everything else.

On top of completing the given tasks, we also created our own motion-detection function, which let us place balls on the checkerboard without the robot arm prematurely moving to grab them. To accomplish this, we had to learn to compare two images and check whether their difference exceeded a given threshold; if it did, we knew the images differed enough that there was movement happening on the checkerboard. Overall, this lab challenged us to apply the concepts of robotic kinematics while combining them with a sensor, the camera. We were also able to explore each topic in more depth, creating additional methods that allowed our robot to accomplish its task to great success.
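Our image pipeline was built in MATLAB; the mask-then-centroid step can be sketched in NumPy roughly as follows (the color bounds and function names are illustrative, not our actual tuned values):

```python
import numpy as np

def color_mask(img, lo, hi):
    """Boolean mask of pixels whose RGB values fall inside [lo, hi] per channel."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((img >= lo) & (img <= hi), axis=-1)

def mask_centroid(mask):
    """Pixel-coordinate centroid (row, col) of the masked region, or None if empty."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()
```

The centroid in pixel coordinates is what then gets pushed through the camera-to-checkerboard-to-robot frame transformations to produce a pick target.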
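The frame-differencing idea behind our motion-detection function can be sketched as below; the thresholds here are placeholder values, not the ones we tuned in MATLAB:

```python
import numpy as np

def board_is_still(frame_a, frame_b, pixel_thresh=25, frac_thresh=0.01):
    """Compare two grayscale frames: return True when the fraction of pixels
    that changed by more than pixel_thresh stays below frac_thresh."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    changed = np.mean(diff > pixel_thresh)
    return bool(changed < frac_thresh)
```

In use, the arm would only plan a grab once consecutive frames of the checkerboard came back "still", so a hand placing a ball would not be interrupted.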



