Contrivance of Recognised Hand Gestures into Voice and Text Output

Communication between mute people and hearing people has always been difficult. Mute people use hand gestures, as a sign language, to convey their feelings, but sign language varies from person to person and place to place, and it is difficult for others to understand these gestures without learning them. To overcome this difficulty, we propose a model that uses finger counts as hand gestures and converts them into voice and textual output. The model comprises two techniques for hand gesture recognition. In the first technique, hand gestures are captured as still images and analysed with an image-processing algorithm written in MATLAB. The second technique is a real-time algorithm that converts hand gestures directly into voice and textual output without capturing or storing any image, i.e. it takes live video as input. The two techniques were implemented and compared in order to design a system that gives accurate output. The analysis shows that both techniques give accurate output when provided with good lighting conditions and an even background. Because the proposed model does not require a data set, it reduces both the memory used and the complexity of the system. The main advantage of this system is that it performs no comparison against a data set; it is therefore not user-specific and shows less error than data-set-based techniques.

Index Terms - Hand gesture recognition, real-time algorithm, data set.
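The core idea of dataset-free finger counting can be illustrated with a toy sketch: once a frame has been thresholded into a binary hand silhouette, the raised fingers can be counted directly from the mask, with no stored templates to compare against. The scan-line heuristic, the `count_fingers` name, and the hard-coded mask below are all illustrative assumptions, not the paper's actual MATLAB implementation.

```python
# Toy sketch of dataset-free finger counting on a binarized hand image.
# The real system thresholds camera frames (in MATLAB, or live video in
# the real-time technique); here a tiny hard-coded 0/1 mask stands in
# for the segmented hand. Names and heuristic are assumptions.

def count_fingers(mask, row):
    """Count 0->1 transitions along one row of a binary mask.

    Each transition marks the left edge of a white run crossing the
    scan line, so with the line placed across the raised fingers the
    transition count equals the finger count.
    """
    count = 0
    prev = 0
    for pixel in mask[row]:
        if pixel == 1 and prev == 0:
            count += 1
        prev = pixel
    return count

# 0/1 mask: three raised "fingers" above a solid palm region.
MASK = [
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # fingertips
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # scan line: three separate white runs
    [1, 1, 1, 1, 1, 1, 1, 1, 1],   # palm: one solid white run
]

if __name__ == "__main__":
    print(count_fingers(MASK, row=1))  # three white runs -> 3
```

In the full system, the resulting count would then be mapped to a word and passed to text and speech output; because nothing is matched against stored examples, the same logic works for any user's hand.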