Detection of Authenticity of Images
Virginia Peter Gonsalves1, Tahira H. Shaikh2, Muskan A. Pathan3, Plasin Francis Dias4
1Virginia Peter Gonsalves, Student, Department of Electronics and Communication Engineering, KLS VDIT, Haliyal (Karnataka), India.
2Tahira H. Shaikh, Student, Department of Electronics and Communication Engineering, KLS VDIT, Haliyal (Karnataka), India.
3Muskan A. Pathan, Student, Department of Electronics and Communication Engineering, KLS VDIT, Haliyal (Karnataka), India.
4Prof. Plasin Francis Dias, Assistant Professor, Department of Electronics and Communication Engineering, KLS VDIT, Haliyal (Karnataka), India.
Manuscript received on 24 May 2025 | First Revised Manuscript received on 31 May 2025 | Second Revised Manuscript received on 18 September 2025 | Manuscript Accepted on 15 October 2025 | Manuscript published on 30 October 2025 | PP: 11-17 | Volume-15 Issue-1, October 2025 | Retrieval Number: 100.1/ijeat.E467414050625 | DOI: 10.35940/ijeat.E4674.15011025
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: The proliferation of AI-generated images, enabled by deep learning algorithms, has raised concerns about misinformation, media manipulation, and a rapid decline in trust in visual content. In this study, we develop a system that supports the credibility of digital media through image verification. A custom Convolutional Neural Network (CNN), designed specifically for authenticity detection, was trained on a dataset of 5,392 authentic and AI-generated images, divided into training (3,964), validation (714), and test (714) subsets. Data augmentation techniques, including rotation, flipping, and brightness adjustment, were applied during preprocessing to create a more varied representation of the images. The CNN architecture comprises four convolutional blocks, each with batch normalisation, max pooling, and dropout layers to prevent overfitting, followed by dense layers for binary classification. The model was trained for 50 epochs using the Adam optimiser (learning rate 0.0001) and binary cross-entropy loss, with callbacks for early stopping, model checkpointing, and learning-rate reduction. On the test set, the model achieved an accuracy of 93.56%, a precision of 95.87%, a recall of 91.04%, and an AUC-ROC of 0.9754, confirming strong discriminatory power. A classification report showed balanced precision and recall for both the authentic and fake classes. Our study also highlights data augmentation and batch normalisation as key contributors to this accuracy.
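The architecture and training setup described in the abstract can be sketched in Keras roughly as follows. This is a minimal, hypothetical sketch: the abstract specifies only the overall structure (four convolutional blocks with batch normalisation, max pooling, and dropout, followed by dense layers, Adam at a 0.0001 learning rate, binary cross-entropy, and the three callbacks), so the filter counts, dropout rates, input size, and callback parameters below are assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of the described CNN; layer widths, dropout rates,
# input size, and callback settings are assumptions not given in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(input_shape=(128, 128, 3)):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Four convolutional blocks: Conv -> BatchNorm -> MaxPool -> Dropout,
    # as described in the abstract (dropout and batch norm curb overfitting).
    for filters in (32, 64, 128, 256):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D())
        model.add(layers.Dropout(0.25))
    # Dense head for binary (authentic vs. AI-generated) classification.
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# The three callback types named in the abstract: early stopping,
# model checkpointing, and learning-rate reduction on plateau.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5),
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
]
```

A model built this way would then be fitted for up to 50 epochs with `model.fit(..., epochs=50, callbacks=callbacks)` on the augmented training split, with the validation split driving the callbacks.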
Beyond addressing one of the threats posed by synthetic imagery, the resulting model can be applied in journalism, digital forensics, and social media moderation. This research thereby contributes to re-establishing trust in visual media and reducing the risks associated with fake news.
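For concreteness, the reported test-set figures are mutually consistent with a single confusion matrix. Assuming a balanced 357/357 split of the 714 test images (an assumption; the per-class split is not stated in the abstract), counts of 325 true positives, 14 false positives, 32 false negatives, and 343 true negatives reproduce all three reported numbers:

```python
# Hypothetical confusion-matrix counts consistent with the reported metrics,
# assuming the 714-image test set splits 357 authentic / 357 AI-generated.
TP, FP = 325, 14   # authentic correctly found / fakes wrongly called authentic
FN, TN = 32, 343   # authentic images missed / fakes correctly rejected

accuracy  = (TP + TN) / (TP + FP + FN + TN)   # fraction correct overall
precision = TP / (TP + FP)                    # how often "authentic" calls are right
recall    = TP / (TP + FN)                    # fraction of authentic images found

print(f"accuracy={accuracy:.2%} precision={precision:.2%} recall={recall:.2%}")
# → accuracy=93.56% precision=95.87% recall=91.04%
```

These counts are a reconstruction from the published percentages, not figures reported in the paper, but they show the metrics are internally consistent.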
Keywords: AUC-ROC, Convolutional Neural Network, Data augmentation, Normalization.
Scope of the Article: Image Analysis and Processing
