Enhancing Underwater Imagery Using Generative Adversarial Networks: A Deep Sea Vision Approach | IJET – Volume 11 Issue 6 | IJET-V11I6P16



Open Access • Peer Reviewed • High Citation & Impact Factor • ISSN: 2395-1303

Volume 11, Issue 6  |  Published: November 2025

Authors: T Swath, Kurri Sai Priya, Kasaram Bhanuja, Manda Deepika, Malreddy Anusha, Mallela Sruthi Reddy

Abstract

Underwater image enhancement is a challenging task due to factors such as light absorption, scattering, and color distortion, which degrade image quality and visibility. This project proposes an advanced image enhancement approach using Generative Adversarial Networks (GANs) to address these challenges effectively. The study explores and implements various GAN-based architectures, including Deep Convolutional GANs (DCGANs) and Conditional GANs (cGANs), for restoring natural colors, enhancing contrast, and improving overall image clarity. By leveraging the generator–discriminator framework of GANs, the model learns to reconstruct visually appealing underwater images that closely resemble their natural appearance. Comprehensive experimentation and evaluation are conducted using standard underwater image datasets and objective metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and color accuracy. The proposed method aims to outperform traditional image enhancement techniques in both visual quality and computational efficiency. This research contributes to the growing field of underwater image processing, offering potential applications in marine biology, underwater robotics, environmental monitoring, and ocean exploration. The findings demonstrate the feasibility and effectiveness of GAN-based approaches for robust underwater image enhancement.
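The PSNR and SSIM metrics mentioned above can be sketched in code. The snippet below is an illustrative implementation, not taken from the paper: PSNR follows the standard definition for 8-bit images, while `global_ssim` is a simplified single-window variant of SSIM (the standard metric averages SSIM over small local windows).

```python
# Illustrative sketch: PSNR and a simplified global SSIM for comparing
# a reference image against an enhanced image (8-bit NumPy arrays).
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(reference, enhanced, max_val=255.0):
    """SSIM computed once over the whole image (a simplification of the
    usual locally-windowed SSIM). Returns 1.0 for identical images."""
    x = reference.astype(np.float64)
    y = enhanced.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In practice, library implementations such as `skimage.metrics.structural_similarity` compute the windowed version; the global variant here is only meant to show the structure of the metric.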

Keywords

GAN, deep sea vision, image enhancement, underwater image

Conclusion

GANs are a comparatively new field, so there is much to explore and learn. As an unsupervised learning method, GANs represent one of the most important research directions in deep learning. By relying on an internal confrontation between real data and a generative model, GANs offer a glimpse of self-learning ability in AI. Throughout this project, we have demonstrated the effectiveness of GAN-based techniques in enhancing underwater images by learning from both clean and distorted data. By training the model on a diverse dataset and optimizing its architecture, we achieved notable improvements in image clarity, natural color restoration, and noise reduction. Our experimentation and evaluation show that the proposed method outperforms traditional image enhancement techniques, providing more robust and reliable results. Looking ahead, the insights gained from this project pave the way for further advances in underwater imaging technology. Future research directions include exploring novel GAN architectures, integrating additional sensor data for enhanced performance, and addressing the specific challenges of different underwater environments. Overall, this project contributes to ongoing efforts to unlock the full potential of underwater imaging, ultimately leading to a better understanding and utilization of our planet's underwater ecosystems.
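The adversarial "confrontation" between real data and the model can be illustrated with a deliberately tiny, hypothetical example. The sketch below is not the paper's architecture: it trains a toy 1-D GAN in which the generator is a linear map `g_w * z + g_b` and the discriminator is logistic regression, using the non-saturating generator loss. Real underwater-image GANs such as DCGANs and cGANs replace these linear toys with deep convolutional networks, but the alternating update pattern is the same.

```python
# Hypothetical toy 1-D GAN, NumPy only. Real data ~ N(4, 1); the
# generator maps noise z ~ N(0, 1) to g_w * z + g_b and, through the
# adversarial game, its output should drift toward the real mean (4.0).
import numpy as np

rng = np.random.default_rng(42)
g_w, g_b = 1.0, 0.0   # generator parameters
d_w, d_b = 0.1, 0.0   # discriminator parameters (logistic regression)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.0, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = g_w * z + g_b

    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # gradients of the binary cross-entropy loss w.r.t. d_w, d_b
    grad_w = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- Generator step: push D(fake) -> 1 (non-saturating loss) ---
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    # d/d(fake) of -log D(fake) = (p_fake - 1) * d_w; chain through g
    g_grad = (p_fake - 1.0) * d_w
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

# After training, g_b should have drifted toward the real mean (4.0).
```

The same two-step loop (discriminator update, then generator update) underlies the image-enhancement setting, where the "real" samples are clean reference images and the generator's input is a degraded underwater image rather than pure noise.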

References

[1] Liang Gong and Yimin Zhou, "A Review: Generative Adversarial Networks," in 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2019.
[2] Yu Xinyu, "Emerging Applications of Generative Adversarial Networks," in IOP Conference Series: Materials Science and Engineering (MEMA 2019).
[3] Ahmad Al-qerem, Yasmeen Shaher Alsalman, and Khalid Mansour, "Image Generation Using Different Models of Generative Adversarial Network," 2019.
[4] Tero Karras, Samuli Laine, and Timo Aila, "A Style-Based Generator Architecture for Generative Adversarial Networks," 2019.
[5] Piotr Teterwak, Aaron Sarna, Dilip Krishnan, Aaron Maschinot, David Belanger, Ce Liu, and William T. Freeman, "Boundless: Generative Adversarial Networks for Image Extension," 19 Aug 2019.
[6] Huaming Liu, Guanming Lu, Xuehui Bi, Jingjie Yan, and Weilan Wang, "Image Inpainting Based on Generative Adversarial Networks," 2019.
[7] Ming Li, Rui Xi, Beier Chen, Mengshu Hou, Daibo Liu, and Lei Guo, "Generate Desired Images from Trained Generative Adversarial Networks," 2019.
[8] Han Wang, Wei Wu, Yang Su, Yongsheng Duan, and Pengze Wang, "Image Super Resolution Using an Improved Generative Adversarial Network," 2019.
[9] Nathanael Carraz Rakotonirina and Andry Rasoanaivo, "ESRGAN+: Further Improving Enhanced Super-Resolution Generative Adversarial Network," 15 Jul 2020.
[10] Yi Jiang, Jiajie Xu, Jing Xu, et al., "Image Inpainting Based on Generative Adversarial Networks," 2020.
[11] Chaoyue Wang, Chang Xu, Xin Yao, and Dacheng Tao, "Evolutionary Generative Adversarial Network," IEEE, 2018.
[12] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Johannes Totz, Zehan Wang, and Wenzhe Shi, "Photo-Realistic Single Image Super-Resolution Using a GAN."
[13] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen, "Progressive Growing of GANs for Improved Quality, Stability and Variation."
[14] Zhaoqing Pan, Weijie Yu, Xiaokai Yi, Asifullah Khan, Feng Yuan, and Yuhui Zheng, "Recent Progress on Generative Adversarial Networks: A Survey."
[15] Yang-Jie Cao, Li-Li Jia, Yong-Xia Chen, and Nan Lin, "Recent Advances of Generative Adversarial Networks in Computer Vision."

Cite this article

APA
T Swath, Kurri Sai Priya, Kasaram Bhanuja, Manda Deepika, Malreddy Anusha, Mallela Sruthi Reddy (November 2025). Enhancing Underwater Imagery Using Generative Adversarial Networks: A Deep Sea Vision Approach. International Journal of Engineering and Techniques (IJET), 11(6). https://zenodo.org/records/17681642
IEEE
T Swath, Kurri Sai Priya, Kasaram Bhanuja, Manda Deepika, Malreddy Anusha, Mallela Sruthi Reddy, "Enhancing Underwater Imagery Using Generative Adversarial Networks: A Deep Sea Vision Approach," International Journal of Engineering and Techniques (IJET), vol. 11, no. 6, November 2025, doi: https://zenodo.org/records/17681642.