In this paper we present a method for recognizing isolated Arabic sign language gestures in a user-independent mode. The proposed method requires signers to wear gloves to simplify segmenting out the signer's hands via color segmentation. The consecutive frame differences of the segmented signing hands are then thresholded and accumulated into two static images that preserve the motion information. A special accumulation strategy is employed to maintain the directionality of the projected motion. To filter out irrelevant sources of motion in the resulting images, we encapsulate the movements of the segmented hands in a bounding box. The bounded images are then transformed into the frequency domain using the Discrete Cosine Transform (DCT), followed by zonal coding to form the feature vectors. The effectiveness of the proposed user-independent feature extraction scheme is assessed with two different classification techniques, namely K-nearest neighbors (KNN) and polynomial networks. ©2007 IEEE.
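The pipeline described above (thresholded frame differences accumulated into a static motion image, cropped to a bounding box, then DCT with zonal coding) can be sketched in NumPy as below. This is a minimal illustration, not the authors' implementation: the choice of threshold, the fixed patch size, and in particular the index-weighted accumulation used here as a stand-in for the paper's directional accumulation strategy are all assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(img):
    """Separable 2-D DCT-II of an image."""
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

def accumulate_motion(frames, thresh=20):
    """Threshold consecutive frame differences and sum them into one
    static motion image. Weighting each difference by its frame index
    (an assumption here) keeps later motion brighter than earlier
    motion, loosely preserving directionality."""
    acc = np.zeros(frames[0].shape, dtype=float)
    for t, (prev, cur) in enumerate(zip(frames, frames[1:]), start=1):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        acc += t * (diff > thresh)
    return acc

def bounding_box(img):
    """Crop to the smallest box containing all motion pixels."""
    rows = np.any(img > 0, axis=1)
    cols = np.any(img > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

def zonal_features(img, size=32, zone=8):
    """Place the bounded motion image on a fixed-size patch, take its
    2-D DCT, and keep the low-frequency (top-left) zone as features."""
    patch = np.zeros((size, size))
    h, w = min(img.shape[0], size), min(img.shape[1], size)
    patch[:h, :w] = img[:h, :w]
    return dct2(patch)[:zone, :zone].ravel()
```

In use, each color-segmented hand would yield its own sequence of frames; the per-hand feature vectors would then be concatenated and passed to a classifier such as KNN.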