Now that we are familiar with forward kinematics we might want to ask the inverse question: given a desired Cartesian position, X, what set of joint angles is needed to achieve it? Unfortunately this process is complicated by the fact that a manipulator can reach a target Cartesian position with several different sets of joint angles, as shown to the right. Another feature of inverse kinematics in general is the large number of manipulator-specific solutions, that is, closed-form solutions derived for one particular manipulator. In this tutorial I will discuss a general method which is applicable to all serial manipulators. While it is very general, it can have high computational requirements under certain conditions, and solutions tailored to a specific robot tend to be much faster; if you are interested in such tailor-made solutions, a good place to start is to search for geometric solutions for serial manipulators. Once we know how to solve the inverse kinematics problem we will have a set of joint angles which we can use in the next section on position control.
The general method of solving the inverse kinematics presented here is essentially a gradient-descent-style iteration, based on the fact that the kinematic Jacobian relates joint velocities to Cartesian velocities. The basic idea is that, for a small timestep Δt, the following holds approximately true for the Cartesian position vector “x”:

    x_{k+1} ≈ x_k + J(θ_k) θ̇ Δt
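To see that this first-order approximation really does hold, here is a minimal numerical check using a hypothetical planar 2-link arm (link lengths, forward kinematics, and the analytic Jacobian below are my own example, not from this tutorial): a small joint-space step multiplied by the Jacobian predicts the actual Cartesian change to high accuracy.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # hypothetical link lengths of a planar 2-link arm

def fk(theta):
    """Forward kinematics: end-effector (x, y) of the 2-link planar arm."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    """Analytic kinematic Jacobian dx/dtheta of the same arm."""
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

theta = np.array([0.3, 0.5])
dtheta = np.array([1e-4, -2e-4])            # a small joint-space step (theta_dot * dt)
dx_true = fk(theta + dtheta) - fk(theta)    # actual Cartesian change
dx_linear = jacobian(theta) @ dtheta        # first-order prediction J * dtheta
print(np.allclose(dx_true, dx_linear, atol=1e-6))  # the approximation holds
```

The approximation error is second order in the step size, which is why the relation is only trusted for small timesteps.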
By using this iterative relation we can move the manipulator in a chosen Cartesian direction. So if we make the Cartesian velocity proportional to the Cartesian error, the above iteration will move the manipulator in a direction which reduces that error (here FK() is the forward kinematics function):

    ẋ = K (x_ref − FK(θ))
So then the iteration which will make the manipulator's Cartesian position converge to the reference position is the following, where J⁺ is the pseudoinverse of the Jacobian and α absorbs the gain K and the timestep Δt:

    θ_{k+1} = θ_k + α J⁺(θ_k) (x_ref − FK(θ_k))
Once the position error decreases to a sufficiently small value the iteration can be terminated and the resulting joint angles can be considered “good enough”. Of course it may also happen that the algorithm takes longer than a single control period to converge. In such a situation the maximum number of iterations should be limited so that the processor can still complete all its tasks within a single control period, and hopefully the fact that the algorithm is re-run every control period makes it converge in the long run. The logic flow of the algorithm can be seen to the right.
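The whole scheme can be sketched in a few lines of Python. This is an illustrative implementation under my own assumptions, not the tutorial's exact code: the planar 2-link arm (lengths, FK, Jacobian), the gain, the tolerance, and the iteration cap are all example choices, and the Jacobian is inverted with the Moore-Penrose pseudoinverse so the same loop also applies to non-square Jacobians.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # hypothetical link lengths of a planar 2-link arm

def fk(theta):
    """Forward kinematics: end-effector (x, y)."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    """Analytic kinematic Jacobian dx/dtheta."""
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def ik_iterative(x_ref, theta0, gain=0.5, tol=1e-6, max_iters=200):
    """Iterative IK: step the joints along J^+ * (x_ref - FK(theta))."""
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iters):              # cap iterations to fit a control period
        err = x_ref - fk(theta)             # Cartesian position error
        if np.linalg.norm(err) < tol:       # "good enough" -> terminate early
            break
        theta += gain * np.linalg.pinv(jacobian(theta)) @ err
    return theta

x_ref = np.array([1.2, 0.7])                # reachable target (|x_ref| < L1 + L2)
theta = ik_iterative(x_ref, theta0=[0.2, 1.2])
print(np.allclose(fk(theta), x_ref, atol=1e-4))
```

Note that near singular configurations the pseudoinverse can produce very large joint steps; a damped least-squares inverse is a common practical remedy, and the iteration cap plays exactly the role described above when the loop must fit inside one control period.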
Next: Disturbance Observer
Previous: Kinematic Jacobian