OPTIMAL DEEP NEURAL NETWORK PARAMETERS FOR POWER LOSS MINIMIZATION ANALYTICS
DOI: https://doi.org/10.4314/njt.2025.4911

Keywords: DNN, Optimal Activation Function, Smart Grid, Power Loss, Model Performance Metrics, Parameter Tuning

Abstract
Power loss remains a critical problem in power systems, leading to voltage instability, reduced transmission efficiency, equipment degradation, and significant financial losses for utility providers. These inefficiencies disrupt reliable electricity delivery, increase operational costs, and hinder the sustainability goals of smart grid infrastructure. Traditional analytical and optimization methods have often proven inadequate in accurately modeling the nonlinear and dynamic behaviors that characterize power networks. As a result, there is a growing need for advanced intelligent models capable of analyzing large-scale data, uncovering patterns, and predicting losses more effectively. This study addresses the power loss challenge by developing an optimized Deep Neural Network (DNN) framework for power loss minimization analytics in smart grids. Three separate power loss datasets were analyzed using the Orange data mining platform and the Python development environment. The DNN model was systematically tuned by optimizing key parameters such as activation functions, number of hidden layers, learning rates, and batch sizes. Among the configurations tested, the model employing the Rectified Linear Unit (ReLU) activation function with six hidden layers achieved the best performance. The optimized DNN produced a Mean Squared Error (MSE) of 1.0E-03, Root Mean Squared Error (RMSE) of 3.4E-02, Mean Absolute Error (MAE) of 1.9E-02, coefficient of determination (R²) of 0.94, and Mean Absolute Percentage Error (MAPE) of 4.8%. When compared with conventional models, the optimized DNN demonstrated a 20–25% improvement in predictive accuracy. These results confirm that optimizing DNN parameters significantly enhances power loss analytics in smart grid systems. The proposed model offers a robust and intelligent solution for minimizing losses, improving energy efficiency, and supporting informed decision-making in modern power networks.
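The five figures of merit reported above (MSE, RMSE, MAE, R², MAPE) follow from standard definitions and can be computed directly from a model's predictions. A minimal NumPy sketch is shown below; the `y_true`/`y_pred` values are illustrative placeholders, not data from the study, and the function name is an assumption for this example only.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the five error metrics used to evaluate the DNN."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)                      # Mean Squared Error
    rmse = np.sqrt(mse)                          # Root Mean Squared Error
    mae = np.mean(np.abs(err))                   # Mean Absolute Error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    mape = np.mean(np.abs(err / y_true)) * 100.0 # assumes no zero targets
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2, "MAPE": mape}

# Illustrative power-loss values (kW); not the study's data
y_true = [0.42, 0.37, 0.55, 0.60, 0.48]
y_pred = [0.40, 0.39, 0.53, 0.62, 0.47]
print(regression_metrics(y_true, y_pred))
```

The same function can score any tuned configuration (activation function, hidden-layer count, learning rate, batch size) on a held-out test set, making configurations directly comparable.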
License
Copyright (c) 2025 Nigerian Journal of Technology

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

