Existing work on Arabic Sign Language recognition focuses on fingerspelling and isolated gestures. In this work we extend existing vision-based solutions to the recognition of continuous signing. To that end, we have collected and labeled the first video-based continuous Arabic Sign Language dataset, which we intend to make available to the research community. The proposed solution extracts motion from the video-based sentences by thresholding the forward prediction error between consecutive frames. These prediction errors are then transformed into the frequency domain and zonal coded. We use Hidden Markov Models for model training and classification. Experimental results show an average word recognition rate of 94%, bearing in mind the use of a high-perplexity vocabulary and an unrestricted grammar. ©2008 IEEE.
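
The feature-extraction pipeline the abstract describes (thresholded forward prediction error, frequency-domain transform, zonal coding) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the choice of the DCT as the frequency transform, the threshold value, and the zone size are all assumptions made here for concreteness.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (pure NumPy)."""
    k = np.arange(n)                      # frequency index (rows)
    m = np.arange(n)                      # sample index (columns)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)            # DC row scaling
    return C

def motion_features(prev, curr, thresh=20, zone=8):
    """Sketch of one feature vector for a frame pair:
    1. forward prediction error between consecutive frames,
    2. binarize by thresholding (isolates the moving signer),
    3. 2-D frequency transform (DCT assumed here),
    4. zonal coding: keep only the low-frequency zone x zone block.
    `thresh` and `zone` are illustrative values, not from the paper."""
    err = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    motion = (err > thresh).astype(np.float64)        # binary motion image
    h, w = motion.shape
    coeffs = dct_matrix(h) @ motion @ dct_matrix(w).T # 2-D DCT
    return coeffs[:zone, :zone].flatten()             # zonal-coded features
```

A sequence of such vectors, one per frame pair, would then serve as the observation stream for Hidden Markov Model training and decoding.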