This project is part of a Machine Learning exam, focusing on the application of ML techniques to predict the end-effector’s position and orientation in Cartesian space based on joint configurations. The study evaluates the accuracy of different models and compares analytical and learned Jacobians.
The goal is to develop machine learning models that learn the forward kinematic mapping from datasets containing joint angles and their corresponding end-effector states. Each dataset includes (see the sketch after this list):
Joint Angles: Values recorded for different degrees of freedom (DOF).
Trigonometric Features: Cosine and sine values of each joint angle.
Target Outputs: End-effector positions and, where applicable, orientations.
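As an illustration of this layout, the sketch below builds the trigonometric features from a set of joint angles. The column names, the number of DOF, and the random sampling are placeholders for illustration, not the project's actual data format.

```python
# Hypothetical sketch of the dataset layout; column names, DOF count,
# and random sampling are placeholders, not the project's actual files.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_samples, n_dof = 1000, 2

# Joint angles for each degree of freedom.
q = rng.uniform(-np.pi, np.pi, size=(n_samples, n_dof))

# One column per angle plus its cosine and sine (the trigonometric features).
columns = {}
for j in range(n_dof):
    columns[f"q{j + 1}"] = q[:, j]
    columns[f"cos_q{j + 1}"] = np.cos(q[:, j])
    columns[f"sin_q{j + 1}"] = np.sin(q[:, j])

df = pd.DataFrame(columns)
print(df.head())
```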
The project covers robotic systems of varying complexity (the simplest, planar 2-DOF case is sketched after this list):
2 DOF (2D)
3 DOF (2D)
5 DOF (3D)
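To ground the analytical side of the Jacobian comparison mentioned above, here is a minimal sketch for the simplest case, a planar 2-DOF arm. The link lengths are placeholder values, not the parameters of the robots studied in the project.

```python
import numpy as np

# Placeholder link lengths; the actual robot parameters come from the report.
L1, L2 = 1.0, 1.0

def forward_kinematics_2dof(q1, q2):
    """End-effector (x, y) of a planar 2-DOF arm."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def analytical_jacobian_2dof(q1, q2):
    """2x2 Jacobian d(x, y)/d(q1, q2) of the same arm."""
    j11 = -L1 * np.sin(q1) - L2 * np.sin(q1 + q2)
    j12 = -L2 * np.sin(q1 + q2)
    j21 = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    j22 = L2 * np.cos(q1 + q2)
    return np.array([[j11, j12], [j21, j22]])

print(forward_kinematics_2dof(0.3, 0.5))
print(analytical_jacobian_2dof(0.3, 0.5))
```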
The machine learning task involves training models to predict the end-effector's position and orientation from the joint angles, yielding a learned forward-kinematics model whose Jacobian can be compared against the analytical one and used for robot control.
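A minimal sketch of such a task is shown below for the synthetic 2-DOF case, assuming a scikit-learn MLPRegressor trained on trigonometric features and a central finite-difference estimate of the learned Jacobian. The model choice, hyperparameters, and finite-difference comparison are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
L1, L2 = 1.0, 1.0  # placeholder link lengths for a synthetic planar 2-DOF arm

def fk(q):
    """Forward kinematics: (N, 2) joint angles -> (N, 2) end-effector positions."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Synthetic dataset: trigonometric features as inputs, positions as targets.
q = rng.uniform(-np.pi, np.pi, size=(5000, 2))
X = np.concatenate([np.cos(q), np.sin(q)], axis=1)
Y = fk(q)

X_tr, X_te, Y_tr, Y_te, q_tr, q_te = train_test_split(X, Y, q, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, Y_tr)
print("test MSE:", mean_squared_error(Y_te, model.predict(X_te)))

def learned_jacobian(q0, eps=1e-4):
    """Central finite-difference Jacobian of the learned model w.r.t. joint angles."""
    J = np.zeros((2, 2))
    for j in range(2):
        qp, qm = q0.copy(), q0.copy()
        qp[j] += eps
        qm[j] -= eps
        fp = model.predict(np.concatenate([np.cos(qp), np.sin(qp)])[None, :])[0]
        fm = model.predict(np.concatenate([np.cos(qm), np.sin(qm)])[None, :])[0]
        J[:, j] = (fp - fm) / (2.0 * eps)
    return J

def analytical_jacobian(q0):
    """Analytical 2x2 Jacobian of the same planar arm, for comparison."""
    s1, s12 = np.sin(q0[0]), np.sin(q0[0] + q0[1])
    c1, c12 = np.cos(q0[0]), np.cos(q0[0] + q0[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q0 = q_te[0]
print("learned Jacobian:\n", learned_jacobian(q0))
print("analytical Jacobian:\n", analytical_jacobian(q0))
```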
For the detailed design and performance evaluation, refer to the full report: