A sign language recognition system translates signs performed by deaf individuals into text or speech in real time. Inertial measurement units (IMUs) and surface electromyography (sEMG) sensors are both useful modalities for detecting hand and arm gestures: each captures signs on its own, and fusing these two complementary modalities can enhance system performance. In this paper, a wearable system for recognizing American Sign Language (ASL) in real time is proposed, fusing information from an inertial sensor and sEMG sensors. An information gain-based feature selection scheme is used to select the best subset of features from a broad range of well-established features. Four popular classification algorithms are evaluated on 80 commonly used ASL signs performed by four subjects. The experimental results show 96.16% and 85.24% average accuracies for intra-subject and intra-subject cross-session evaluation, respectively, with the selected feature subset and a support vector machine classifier. The significance of adding sEMG for ASL recognition is explored and the most informative sEMG channel is highlighted.
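As a rough illustration of the pipeline summarized above, and not the authors' implementation, the sketch below ranks candidate features by information gain (estimated here with scikit-learn's mutual_info_classif) and trains a support vector machine on the top-ranked subset. All variable names, shapes, and the choice of k are hypothetical placeholders.

```python
# Minimal sketch of information gain-based feature selection followed by
# SVM classification. Assumes feature vectors have already been extracted
# from windowed IMU and sEMG signals; data here is random for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def select_top_features(X, y, k):
    """Rank features by information gain (mutual information with the
    class label) and return the indices of the k highest-scoring ones."""
    gains = mutual_info_classif(X, y)
    return np.argsort(gains)[::-1][:k]

# Hypothetical data: 400 sign windows, 60 IMU + sEMG features, 80 sign labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))
y = rng.integers(0, 80, size=400)

top = select_top_features(X, y, k=20)          # keep the 20 best features
clf = SVC(kernel="rbf").fit(X[:, top], y)      # train SVM on the subset
```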