An Efficient Video Compression Framework using Deep Convolutional Neural Networks (DCNN)
- 1 Department of Computer Science and Engineering, R.V.R and J.C College of Engineering, Andhra Pradesh, India
- 2 Department of AI and ML, The Oxford College of Engineering, Karnataka, India
- 3 Department of Computer Science and Engineering-AI, Faculty of Engineering, Jain Deemed to be University, Karnataka, India
Abstract
Video streaming has grown in popularity and now accounts for a large share of internet traffic, making it challenging for service providers to deliver video at high bit rates while using less storage space. Previous video compression prototypes follow hand-crafted, non-learning-based analytical coding designs, which limits their efficiency. We therefore propose a DCNN framework that integrates OFE-Net, MVE-Net, MVD-Net, MC-Net, RE-Net, and RD-Net to obtain an optimal set of frames by relating each frame's pixels to those of the preceding and following frames, identifying correlated blocks, and removing redundant pixels. In terms of MS-SSIM and PSNR, the proposed DCNN approach delivers high video quality at low bit rates.
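The abstract only names the sub-networks and their roles (optical flow estimation, motion vector encoding/decoding, motion compensation, and residual encoding/decoding). A minimal sketch of how such a pipeline might be wired together is shown below; the layer counts, channel widths, and PyTorch framing are assumptions for illustration only, not the authors' actual implementation.

```python
# Illustrative sketch of a learned video-compression pipeline with the
# sub-networks named in the abstract. Internals are assumed, not from the paper.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Small convolutional stack used as a stand-in for each sub-network."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class DCNNCompressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.ofe_net = ConvBlock(6, 2)  # OFE-Net: optical flow from current + previous frame
        self.mve_net = ConvBlock(2, 2)  # MVE-Net: encode motion vectors
        self.mvd_net = ConvBlock(2, 2)  # MVD-Net: decode motion vectors
        self.mc_net = ConvBlock(5, 3)   # MC-Net: motion-compensated prediction
        self.re_net = ConvBlock(3, 3)   # RE-Net: encode residual
        self.rd_net = ConvBlock(3, 3)   # RD-Net: decode residual

    def forward(self, cur, prev):
        flow = self.ofe_net(torch.cat([cur, prev], dim=1))
        mv_code = self.mve_net(flow)                       # compressed motion representation
        mv_hat = self.mvd_net(mv_code)                     # reconstructed motion
        pred = self.mc_net(torch.cat([prev, mv_hat], dim=1))
        res_code = self.re_net(cur - pred)                 # compressed residual
        rec = pred + self.rd_net(res_code)                 # reconstructed frame
        return rec, mv_code, res_code
```

For example, `DCNNCompressor()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))` returns a reconstructed frame plus the motion and residual codes whose bit cost, together with distortion (PSNR or MS-SSIM), would drive training in a rate-distortion framework of this kind.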
DOI: https://doi.org/10.3844/jcssp.2022.589.598
Copyright: © 2022 Kommerla Siva Kumar, P. Bindhu Madhavi and K. Janaki. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Deep Neural Networks
- Encoding
- Decoding
- Video Compression