SRCNN Architecture Optimization for High-Resolution Head-Mounted Displays

Abstract
Demand for virtual-reality applications is growing rapidly, and augmented reality is expanding into fields such as education, business, and entertainment. However, prolonged use of head-mounted displays (HMDs) causes discomfort, eye fatigue, and even dizziness, and blurred or low-resolution imagery, whether caused by hardware limitations or bandwidth constraints, aggravates these problems. Low-resolution images therefore need to be enhanced. The Super-Resolution Convolutional Neural Network (SRCNN) can upscale low-resolution images and video to high resolution, and implementing SRCNN in HMDs can significantly improve visual quality by delivering high-resolution images in real time. SRCNN is a deep-learning model for single-image super-resolution: it reconstructs a high-resolution (HR) image from a low-resolution (LR) input, which makes it valuable in HMDs and virtual reality, where image clarity is crucial. VR headsets require real-time image enhancement for an immersive experience while also minimizing power consumption and heat dissipation, and real-time enhancement must meet tight latency constraints (on the order of milliseconds) to avoid motion sickness. SRCNN typically consists of three convolutional layers: patch extraction and representation, non-linear mapping, and reconstruction. In a VLSI design, SRCNN can be parallelized across multiple processing elements (PEs) to process multiple pixels concurrently, which increases throughput and reduces the time required for real-time image super-resolution.

Keywords - Super Resolution Convolutional Neural Network; VLSI; Head Mounted Display (HMD); Immersive Experience; Low Latency
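The three-layer pipeline described above (patch extraction and representation, non-linear mapping, reconstruction) can be sketched as a plain forward pass. This is a minimal illustrative sketch, not the paper's hardware implementation: the filter sizes and counts (9x9 with 64 filters, 1x1 with 32 filters, 5x5 with 1 filter) follow the common SRCNN configuration, and the `conv2d` helper and parameter names are assumptions for demonstration.

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution.
    x: (H, W, C_in) input; w: (k, k, C_in, C_out) filters; b: (C_out,) bias."""
    k = w.shape[0]
    pad = k // 2
    H, W, _ = x.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            # Contract the (k, k, C_in) patch against every output filter.
            out[i, j, :] = np.tensordot(patch, w,
                                        axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def srcnn_forward(lr_upscaled, params):
    """SRCNN forward pass on a bicubically upscaled LR image (H, W, 1)."""
    # Layer 1: patch extraction and representation (9x9, 64 filters) + ReLU.
    h = np.maximum(conv2d(lr_upscaled, *params["f1"]), 0)
    # Layer 2: non-linear mapping (1x1, 32 filters) + ReLU.
    h = np.maximum(conv2d(h, *params["f2"]), 0)
    # Layer 3: reconstruction (5x5, 1 filter), linear output.
    return conv2d(h, *params["f3"])

# Tiny demo with random weights; in practice the filters are learned.
rng = np.random.default_rng(0)
params = {
    "f1": (0.01 * rng.standard_normal((9, 9, 1, 64)), np.zeros(64)),
    "f2": (0.01 * rng.standard_normal((1, 1, 64, 32)), np.zeros(32)),
    "f3": (0.01 * rng.standard_normal((5, 5, 32, 1)), np.zeros(1)),
}
lr = rng.standard_normal((16, 16, 1))   # stand-in for an upscaled LR patch
hr = srcnn_forward(lr, params)          # same spatial size as the input
```

Note that SRCNN preserves the spatial size of its (already upscaled) input; the inner per-pixel loop is exactly the work that a VLSI design can distribute across multiple PEs, since each output pixel is computed independently.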