This work introduces two novel approaches to feature extraction for video-based Arabic sign language gestures: motion representation through motion estimation, and motion representation through motion residuals. In the former, motion estimation is used to compute the motion vectors of a video-based gesture; the vertical and horizontal components of these vectors are rearranged into intensity images and transformed into the frequency domain. In the latter, the motion residuals are thresholded and transformed into the frequency domain. The motion information is then accumulated over time through either telescopic motion vector composition or polar accumulated differences, and feature vectors are extracted from the accumulated motion information. The superiority of the proposed feature extraction techniques is illustrated through comparisons with existing work. © 2007 IEEE.
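The abstract does not specify implementation details; as a rough illustration only, the motion-estimation branch (rearranging motion-vector components into intensity images and taking frequency-domain coefficients as features) might be sketched as below. The block-field shape, normalization, and number of retained DFT coefficients are assumptions, not the paper's actual parameters:

```python
import numpy as np

def motion_vector_features(mv_x, mv_y, num_coeffs=16):
    """Sketch: rearrange horizontal/vertical motion-vector components
    into intensity images and keep low-frequency DFT magnitudes.

    mv_x, mv_y: 2-D arrays of block motion-vector components (one
    value per macroblock), e.g. from a block-matching search.
    """
    features = []
    for comp in (mv_x, mv_y):
        # Scale the component field into an 8-bit "intensity image".
        img = comp - comp.min()
        if img.max() > 0:
            img = img / img.max()
        img = (img * 255).astype(np.uint8)

        # Transform to the frequency domain; retain a small square of
        # low-frequency magnitude coefficients as the descriptor.
        spectrum = np.abs(np.fft.fft2(img))
        k = int(np.sqrt(num_coeffs))
        features.append(spectrum[:k, :k].ravel())
    return np.concatenate(features)

# Toy example: a synthetic 8x8 field of block motion vectors.
rng = np.random.default_rng(0)
feats = motion_vector_features(rng.normal(size=(8, 8)),
                               rng.normal(size=(8, 8)))
print(feats.shape)  # 16 coefficients per component -> (32,)
```

The motion-residual branch would follow the same pattern, with the thresholded residual frame taking the place of the rearranged component image before the frequency transform.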