Balancing (postural control) is one of the fundamental motor functions of humans. Without this ability, we would have difficulty performing daily tasks, or even walking on two legs. This topic has mainly been studied in the physiology community, and many hypotheses have been tested on human subjects in physiological and medical contexts. A human being is a hyper-redundant system, and its vast set of motor commands must be determined on-line to solve various motor tasks in different environments.
In our project, we pursue this crucial research topic by utilizing control theory and machine learning under plausible physiological constraints. When we try to reproduce the same behavior on computer models or robots with similar body functions, we encounter computational problems involving coordinate transformation and internal-model prediction. Our humanoid robot can simulate human musculo-skeletal systems. Using this experimental tool, we explore reasonable control and learning mechanisms for balancing and walking based on the mechanical properties of the given body. Our humanoid robot's capability, in some sense, imposes a strong constraint on our approach to studying human motor control.
We believe that reproducing human-like behaviors on humanoid robots with similar body structures is an efficient methodology, not only for understanding our own motor control and learning mechanisms but also for practical applications such as rehabilitation or the development of human-friendly robots.
Humans can control their joint torques. We first implemented a computer program to precisely control whole-body joint torques using force sensors attached to CB-i's body. We then developed several motion control algorithms in which the control input is defined as the joint torque. An extra benefit is that we can easily simulate the robot's behavior on multi-body dynamics simulators without modeling the actuator dynamics that conventional position-control-based robots require.
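To illustrate the last point, here is a minimal sketch (our own example, not the project's actual simulator) of how a torque-controlled robot can be simulated: the commanded joint torque enters the rigid-body dynamics directly, so no motor model is needed. A single-joint pendulum stands in for the whole-body case; all parameter values are illustrative.

```python
import math

# Single pendulum under direct torque control:
#   I * ddq = tau - m*g*l*sin(q) - b*dq
# The commanded torque tau is the control input itself; no actuator
# dynamics are modeled.

def simulate_pendulum(tau_fn, q0=0.5, dq0=0.0, m=1.0, l=0.5, b=0.05,
                      g=9.81, dt=1e-3, steps=2000):
    I = m * l * l          # point-mass inertia about the joint
    q, dq = q0, dq0
    for _ in range(steps):
        tau = tau_fn(q, dq)
        ddq = (tau - m * g * l * math.sin(q) - b * dq) / I
        dq += ddq * dt     # explicit Euler integration
        q += dq * dt
    return q, dq

# A simple joint-space PD law expressed directly as a torque command.
def pd_torque(q, dq, q_des=0.0, kp=20.0, kd=4.0):
    return kp * (q_des - q) - kd * dq

q_end, dq_end = simulate_pendulum(pd_torque)  # converges toward q_des
```

Because the control input and the simulator input are the same quantity (torque), the same controller code can run on the simulated and the real robot.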
With the full-body torque controller, the robot can freely move its limbs or compensate for gravity to passively follow the external forces applied by humans (see the movie).
One important aspect of gravity compensation is its feedforward property: the robot applies a "known" load to its environment without feedback. Although gravity compensation by torque control is a fundamental technique widely adopted in industrial robots, its feedforward nature also enables compliant adaptation to the environment.
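The feedforward property can be sketched for a planar two-link arm. The link parameters below are illustrative values, not CB-i's; the point is that the compensating torque depends only on the measured posture, not on any force feedback.

```python
import math

# Feedforward gravity compensation for a planar 2-link arm.
# Joint angles q1, q2 are measured from the vertical; masses, lengths,
# and COM offsets are made-up example values.

def gravity_torque(q1, q2, m1=2.0, m2=1.5, l1=0.4, lc1=0.2, lc2=0.2,
                   g=9.81):
    # Torque needed at each joint to hold the arm against gravity.
    tau2 = m2 * g * lc2 * math.sin(q1 + q2)
    tau1 = (m1 * lc1 + m2 * l1) * g * math.sin(q1) + tau2
    return tau1, tau2

# Commanding exactly these torques makes the arm effectively weightless:
# any additional external push accelerates it, so it passively follows
# forces applied by a human.
tau1, tau2 = gravity_torque(0.3, -0.5)
```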
Humans can stably stand on rough terrain by applying forces to multiple contact points on each foot or hand. We developed a practical control algorithm that can optimally and simultaneously control multiple contact forces between the robot and the ground. The following is the computational procedure:
1) If we command only the anti-gravity force, the robot behaves like clay: passively compliant to the surrounding surfaces and forces.
2) We can specify a desired center of mass (CoM) motion (a time course of acceleration, velocity, and position), convert it into desired ground reaction forces (GRFs) in feedforward or feedback form, and then substitute them into algorithm 1). If we set the desired CoM motion to zero, the algorithm serves as a balance controller.
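The conversion from a desired CoM motion to a desired net GRF can be sketched as follows. The gains, mass, and notation here are our own assumptions for illustration; the text does not specify the actual controller parameters.

```python
import numpy as np

# Convert a desired CoM motion into a desired net ground reaction force:
# feedforward (m * a_des) plus PD feedback on CoM position/velocity,
# plus support of the body weight (-m * g).

def desired_grf(p, v, p_des, v_des, a_des, m=60.0, kp=100.0, kd=20.0,
                g=np.array([0.0, 0.0, -9.81])):
    a_cmd = a_des + kp * (p_des - p) + kd * (v_des - v)
    return m * (a_cmd - g)

# With zero desired CoM motion, this reduces to a balance controller:
# the force pushes the CoM back toward p_des while carrying the weight.
f = desired_grf(p=np.array([0.02, 0.0, 0.9]),
                v=np.zeros(3),
                p_des=np.array([0.0, 0.0, 0.9]),
                v_des=np.zeros(3),
                a_des=np.zeros(3))
```

The resulting net GRF would then be distributed over the individual contact points by the optimization of step 1).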
The robot can balance on a randomly moving seesaw (see the movie, left). It can also dance or take steps by superposing joint patterns onto the controller (see the movie, right).
Physiologists have been discussing human push-recovery strategies based on experimental data. We studied such strategies from a control point of view and experimentally implemented them on our humanoid robot.
For example, the movie shows a strong push-recovery motion that uses the rotational moment of the upper body combined with a translational recovery scheme, i.e., a combination of the hip and ankle strategies.
As shown in the movie, if the push is too strong to recover from in place, the robot takes a step to regain its balance in the next support phase. Since the swing leg is moved symmetrically with respect to the CoM, its phase is synchronized with that of the supporting leg. Such phase synchronization is necessary not only for periodic motions but also for global stability.
Of course, the robot should take multiple steps depending on the magnitude of the push, as shown in the three simulation movies (see the movie).
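The text does not give the step-placement law itself. As an illustrative sketch only, the widely used "capture point" of the linear inverted pendulum (LIP) model predicts where a step can absorb a push; all symbols below are model assumptions, not the project's actual controller.

```python
import math

# Capture point of the linear inverted pendulum model: the point on the
# ground at which placing the foot brings the pendulum to rest.

def capture_point(x, dx, z0=0.9, g=9.81):
    # x:  current CoM position, dx: CoM velocity after the push,
    # z0: constant CoM height assumed by the LIP model.
    omega = math.sqrt(g / z0)
    return x + dx / omega

# A stronger push (larger CoM velocity) demands a farther step; if the
# reachable step length is bounded, additional steps become necessary,
# consistent with the multi-step behavior described above.
step1 = capture_point(x=0.0, dx=0.5)   # moderate push
step2 = capture_point(x=0.0, dx=1.2)   # strong push, farther step
```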
Furthermore, we found that biped walking that is robust to external disturbances can be achieved simply by superposing the above symmetric step strategy onto a full-body balancing controller under the velocity-control mode. If we turn off the balancing GRF, the robot is purely gravity-compensated, but with symmetric stepping it can still walk at an almost constant velocity. When pushed, the robot changes its direction of motion like a "ball" rolling on a flat surface (globally stable).
If we activate the velocity controller together with the above controller, the robot can walk at a specified speed and direction (globally asymptotically stable) (see the movie).
1. Motor assistance and rehabilitation
The algorithms created in this project can be directly applied to autonomous controllers for powered suits. Furthermore, we will develop learning algorithms for identifying human models, including body dynamics and motion intention, toward comfortable and appropriate motor assistance or rehabilitation. These algorithms and devices will serve as effective brain-machine interfaces (BMIs) for motor control.
2. Collaborative studies on full-body motor control of humans in different environments
We are interested in human motor control in such extreme environments as space and underwater. We plan to collaborate with neuroscientists and physiologists, focusing on posture and biped walking control. We have demonstrated that our humanoid robot can serve as a very good tool to experimentally evaluate various hypotheses in this field.