Video Classification and Similar Videos Recommendation System

Abstract
This work's main goal is to categorize videos by extracting video frames and generating feature vectors to identify action sequences. To capture both spatial and temporal properties, the proposed deep neural network integrates Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models. Video categorization is accomplished by extracting features from frames while accounting for variables such as frame width, frame height, and video sequence length. The architecture begins with a time-distributed 2D convolutional layer, after which time-distributed max-pooling layers and dropout layers are strategically inserted. Additional convolutional layers with increasing filter sizes further enhance the model's capacity for feature extraction. A flatten layer prepares the output for temporal analysis by an LSTM layer, which captures intricate temporal dependencies within the video sequences. The architecture culminates in a dense layer with a softmax activation function that generates predictions for the various classes. The ISRO Dataset is used for video classification. The results show that, in terms of prediction accuracy, the Long-term Recurrent Convolutional Network (LRCN) approach outperforms both ConvLSTM and conventional CNN methods, and the proposed method offers improved identification accuracy for the temporal and spatial streams. The model predicts the action sequence with 67% accuracy. Along with classification, the system also recommends videos similar to the input video or image.

Keywords - Convolutional Neural Network (CNN), Long-term Recurrent Convolutional Network (LRCN), Long Short-Term Memory (LSTM).
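The layer stack described above (time-distributed Conv2D, max-pooling, and dropout blocks, followed by a flatten layer, an LSTM, and a softmax dense layer) can be sketched as a shape-propagation trace. This is a minimal illustration, not the paper's implementation: the concrete hyperparameters below (three blocks with 16/32/64 filters, 3x3 kernels, 2x2 pooling, 20 frames of 64x64 RGB input, 32 LSTM units, 4 output classes) are assumptions chosen only to make the tensor shapes concrete.

```python
# Hedged sketch of the LRCN-style architecture from the abstract:
# TimeDistributed(Conv2D) -> TimeDistributed(MaxPool2D) -> Dropout,
# repeated, then Flatten -> LSTM -> Dense(softmax).
# All layer sizes are illustrative assumptions, not the paper's values.

def conv2d_shape(h, w, filters, k=3):
    # 'valid' 2D convolution: each spatial dim shrinks by k - 1
    return h - k + 1, w - k + 1, filters

def pool2d_shape(h, w, c, p=2):
    # 2x2 max pooling halves each spatial dim (floor division)
    return h // p, w // p, c

def lrcn_shapes(frames=20, h=64, w=64,
                filter_sizes=(16, 32, 64),
                lstm_units=32, num_classes=4):
    """Trace tensor shapes through the stack.

    Returns (lstm_input_shape, lstm_state_size, output_shape).
    Dropout layers do not change shape, so they are omitted.
    Each conv/pool block is applied per frame (TimeDistributed).
    """
    for f in filter_sizes:
        h, w, c = conv2d_shape(h, w, f)
        h, w, c = pool2d_shape(h, w, c)
    flat = h * w * c                      # Flatten, per frame
    lstm_input = (frames, flat)           # sequence fed to the LSTM
    return lstm_input, lstm_units, (num_classes,)
```

For the assumed 64x64 input, the three conv/pool blocks reduce each frame to a 6x6x64 map, so the LSTM consumes a sequence of 20 vectors of length 2304 and the final dense layer emits one softmax score per class.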