Here [.pdf] is my ICRA 2015 paper with errata (the corrections in equations 23 and 30 are highlighted).

This paper was about variable Center-of-Mass (COM) height trajectory planning with reactive stepping, using non-linear model predictive control (MPC).

Here is an experiment using the proposed approach, where a robot walks over a step while also following a variable COM height profile.

Usually a constant or predefined COM height is used, or a model that limits the robot to behaving as if it only has point feet, because the ZMP equation becomes non-linear when the COM height is a variable. However, it would be desirable to let the COM height be a free variable and to let the ZMP move freely within the support polygon, rather than pinning it to the centre of the foot.
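To see where the non-linearity comes from, here is the point-mass ZMP equation in generic notation (the symbols here are standard ones and may differ from the paper's):

```latex
% ZMP x-coordinate for a point-mass model over a flat floor:
%
%   p_x = x - \frac{z\,\ddot{x}}{\ddot{z} + g}
%
% With z fixed and \ddot{z} = 0 this collapses to the familiar linear relation
%
%   p_x = x - \frac{z}{g}\,\ddot{x}
%
% but once z(t) is a decision variable, rearranging gives
%
%   p_x(\ddot{z} + g) = x(\ddot{z} + g) - z\,\ddot{x}
%
% i.e. products of unknowns, so the "ZMP inside the support polygon"
% constraint is quadratic rather than linear.
```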

In this work, the trajectory planning problem with variable COM height and reactive stepping was formulated as a Quadratically Constrained Quadratic Program (QCQP), which was then solved using Sequential Quadratic Programming (SQP).
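As a minimal illustration of the solver side (not the paper's actual problem data or tailored solver), here is a toy QCQP solved with SciPy's SLSQP, a generic SQP implementation: minimize a quadratic cost subject to a quadratic feasibility constraint standing in for "ZMP inside the support polygon".

```python
# Toy QCQP solved with an off-the-shelf SQP method (SciPy's SLSQP).
# Illustrative only; problem data is made up.
import numpy as np
from scipy.optimize import minimize

# Quadratic objective: stay close to a target point.
target = np.array([2.0, 0.5])

def cost(x):
    return float(np.sum((x - target) ** 2))

# Quadratic inequality constraint: solution must stay inside the unit disk
# (a stand-in for the quadratic ZMP feasibility constraint).
cons = [{"type": "ineq", "fun": lambda x: 1.0 - float(x @ x)}]

res = minimize(cost, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x)  # lands on the unit circle, in the direction of the target
```

Since the target lies outside the disk, the optimum is its projection onto the unit circle.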

The gradient and Hessian of a quadratic can be found analytically, so time-consuming numerical differentiation is not needed within the SQP solver. This helped realize real-time performance (about 4 ms of compute time on a 3.4 GHz quad-core).
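The point is easy to verify on a toy quadratic: for f(u) = ½ uᵀQu + cᵀu the gradient is Qu + c and the Hessian is the constant matrix Q, so no finite differencing is needed. A small sanity check (random data, not the paper's matrices):

```python
# Analytic gradient/Hessian of a quadratic vs. central finite differences.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q = A @ A.T + 3 * np.eye(3)          # symmetric positive definite
c = rng.standard_normal(3)

f = lambda u: 0.5 * u @ Q @ u + c @ u
grad = lambda u: Q @ u + c           # analytic gradient: Qu + c
hess = Q                             # analytic Hessian: constant Q

# Central-difference gradient at a random point, for comparison.
u = rng.standard_normal(3)
eps = 1e-6
num_grad = np.array([(f(u + eps * e) - f(u - eps * e)) / (2 * eps)
                     for e in np.eye(3)])

# Central differences are exact for quadratics up to floating-point roundoff.
print(np.max(np.abs(num_grad - grad(u))))
```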

The main trick in this work was to take the linear relationship between the COM states and the COM jerks, which the MPC literature for humanoid robots provides, and substitute it into the expanded ZMP equation. After collecting terms and cleaning up a little, the quadratic ZMP equation was obtained.
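A sketch of that substitution, in generic MPC notation (the stacking matrices and symbols here are illustrative, not necessarily the paper's):

```latex
% With a triple-integrator COM model, the stacked position and acceleration
% trajectories are affine in the jerk inputs U:
%
%   X = S_x\,\hat{x}_0 + T_x U, \qquad
%   \ddot{X} = S_a\,\hat{x}_0 + T_a U
%   \quad (\text{and likewise for } Z,\ \ddot{Z}).
%
% Substituting these into the rearranged ZMP equation
%
%   p_x(\ddot{z} + g) = x(\ddot{z} + g) - z\,\ddot{x}
%
% makes every term a product of two expressions affine in U, so after
% collecting terms the ZMP constraint is quadratic in U -- which is exactly
% why the problem can be posed as a QCQP.
```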

Haptic control usually refers to the bilateral control of the force/position relationship between two robots. One good demonstration of this technology is to have a human move one robot by hand; the second robot will almost perfectly copy the motion of the first. If the second robot then collides with something like a wall or a sponge, the first robot replays this collision sensation to the human's hand, and if the control is done well the human will feel as if he/she is directly touching the object. This video illustrates the concept:

When this technology is combined with humanoid robotics, it could be used in situations where traversing rough terrain is required but human presence is undesirable due to factors such as radiation or explosives.

However, one problem here is that humanoids have underactuated legs: if the arms apply too large a force on the environment, the robot can fall over. So how is the human to know how much force is too much? We could try indicators, but having a software force limit would be more reassuring. What if we used full-body haptic control? Giving the human full control may solve all kinds of problems, but it introduces some new ones as well. For one, the dynamics of the operator's body and of the humanoid robot will differ, so it is not clear whether the human could control the robot well without at least going through a training program, and controlling a different model of robot may require re-training. So I think it is more practical to let the humanoid autonomously limit its arm forces and take corrective action, such as changing its trajectory and its foot positions.

By using model predictive control to regenerate body and feet trajectories online, while also taking the time-varying force/torque values at the wrists into account, I obtained experimental results that implement the corrective actions mentioned above.

The interested reader can find more details in my Advanced Robotics 2014 paper (here is a link with 50 free copies while it lasts; I will upload the paper next year once the publisher's royalty period expires).

It might happen that your encoders break down (or someone gives you some questionable encoders), and now you need to test whether they're still kicking. Here's a way to do it with a simple voltmeter or, if you want to be fancy, an oscilloscope.
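If the encoder is an incremental quadrature type, what the voltmeter test is really checking is that both the A and B channels toggle as the shaft turns slowly, and that they never jump together (they follow a Gray-code sequence). A minimal sketch of that sanity check, with made-up sample data:

```python
# Sanity-check logged (A, B) logic-level samples from a quadrature encoder.
# Assumes an incremental encoder read as 0/1 levels; sample data is made up.

def looks_healthy(samples):
    """True if both channels toggle and no transition flips both at once."""
    a = [s[0] for s in samples]
    b = [s[1] for s in samples]
    both_toggle = len(set(a)) == 2 and len(set(b)) == 2  # dead channel check
    no_double_step = all(                                # Gray-code check
        (p[0] != c[0]) + (p[1] != c[1]) <= 1
        for p, c in zip(samples, samples[1:])
    )
    return both_toggle and no_double_step

healthy = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]  # normal rotation
stuck_b = [(0, 0), (1, 0), (0, 0), (1, 0)]          # channel B never moves
print(looks_healthy(healthy), looks_healthy(stuck_b))  # True False
```

With just a voltmeter you do the same thing by hand: turn the shaft very slowly and confirm each channel alternates between roughly 0 V and the supply rail.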