Recognition and simulation of actions performable by rigidly jointed actors such as human bodies have been the subject of our research for some time. One part of an ongoing effort toward a total human movement simulator is the development of a system to perform the actions of American Sign Language (ASL). However, one of the "channels" of ASL communication, the face, presents problems that are not well handled by a rigid model.