River Segmentation Using Deep Neural Networks on Aerial Orthophoto

Authors

  • Abdul Hafiz Zakaria, Department of Mechatronics Engineering, Kulliyyah of Engineering, International Islamic University Malaysia, P.O. Box, 50728, Kuala Lumpur, Malaysia
  • Yasir Mohd Mustafah, Department of Mechatronics Engineering, Kulliyyah of Engineering, International Islamic University Malaysia, P.O. Box, 50728, Kuala Lumpur, Malaysia
  • Nor Rohaizah Jamil, Faculty of Forestry and Environment, Universiti Putra Malaysia, 43400, Selangor, Malaysia

DOI:

https://doi.org/10.15282/mekatronika.v7i1.12196

Keywords:

UAV, River, Segmentation, Deep Neural Networks, Remote sensing

Abstract

This study investigates the use of deep learning for automated river segmentation from UAV-captured aerial orthophotos, addressing the limitations of traditional, labor-intensive river monitoring techniques. We introduce an annotated dataset of high-resolution river imagery and evaluate multiple deep neural network architectures for semantic segmentation, including U-Net, FPN, PSPNet, and LinkNet, each with a ResNet50 backbone. To optimize performance, various image patch sizes were tested; 768×768 pixels provided the best trade-off between segmentation accuracy (88.76%) and computational efficiency. Among the tested models, U-Net with a ResNet50 backbone achieved the highest segmentation performance, with an average Intersection over Union (IoU) of 61%, an F1-score of 73%, a precision of 74%, and a recall of 77%. These findings demonstrate the potential of UAV-based remote sensing and deep learning for enhancing the accuracy and efficiency of river monitoring.
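The abstract reports per-pixel metrics (IoU, precision, recall, F1) for the segmentation masks. As a minimal NumPy sketch of how such metrics are computed on binary river masks (function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute IoU, precision, recall, and F1 for binary masks (1 = river)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # river predicted and present
    fp = np.logical_and(pred, ~truth).sum()   # river predicted, not present
    fn = np.logical_and(~pred, truth).sum()   # river missed
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"iou": iou, "precision": precision, "recall": recall, "f1": f1}

# Toy 4x4 example: predicted mask vs. ground truth
pred  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
m = segmentation_metrics(pred, truth)
print(m)  # IoU = 3/5 = 0.6; precision = recall = F1 = 0.75
```

In practice these counts would be accumulated over all 768×768 test patches before computing the final averages.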

References

[1] A. Annis, F. Nardi, A. Petroselli, C. Apollonio, E. Arcangeletti, F. Tauro, et al., “UAV-DEMs for small-scale flood hazard mapping,” Water (Switzerland), vol. 12, no. 6, p. 1717, 2020.

[2] S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access, vol. 6, pp. 64270–64277, 2018.

[3] M. R. Casado and P. Leinster, “Towards more effective strategies to reduce property level flood risk: Standardising the use of unmanned aerial vehicles,” Journal of Water Supply: Research and Technology - AQUA, vol. 69, no. 8, pp. 807–818, 2020.

[4] A. Chaurasia and E. Culurciello, “LinkNet: Exploiting encoder representations for efficient semantic segmentation,” 2017 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4, 2017.

[5] B. T. Forbes, G. P. DeBenedetto, J. E. Dickinson, C. E. Bunch, and F. A. Fitzpatrick, “Using small unmanned aircraft systems for measuring post-flood high-water marks and streambed elevations,” Remote Sensing, vol. 12, no. 9, p. 1437, 2020.

[6] G. Fu, F. Meng, M. Rivas Casado, and R. S. Kalawsky, “Towards integrated flood risk and resilience management,” Water (Switzerland), vol. 12, no. 6, p. 1789, 2020.

[7] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

[8] M. R. Heffels and J. Vanschoren, “Aerial imagery pixel-level segmentation,” arXiv preprint arXiv:2012.02024, 2020.

[9] J. Hu, L. Li, Y. Lin, F. Wu, and J. Zhao, “A comparison and strategy of semantic segmentation on remote sensing images,” International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, pp. 21–29, 2019.

[10] M. La Salandra, G. Miniello, S. Nicotri, A. Italiano, G. Donvito, G. Maggi, et al., “Generating UAV high-resolution topographic data within a FOSS photogrammetric workflow using high-performance computing clusters,” International Journal of Applied Earth Observation and Geoinformation, vol. 105, p. 102600, 2021.

[11] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125, 2017.

[12] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.

[13] L. Lopez-Fuentes, C. Rossi, and H. Skinnemoen, “River segmentation for flood monitoring,” 2017 IEEE International Conference on Big Data (Big Data), pp. 3746–3749, 2017.

[14] J. Lv, Q. Shen, M. Lv, Y. Li, L. Shi, and P. Zhang, “Deep learning-based semantic segmentation of remote sensing images: A review,” Frontiers in Ecology and Evolution, vol. 11, p. 1201125, 2023.

[15] N. A. Muhadi, A. F. Abdullah, S. K. Bejo, M. R. Mahadi, and A. Mijic, “Image segmentation methods for flood monitoring system,” Water (Switzerland), vol. 12, no. 6, p. 1825, 2020.

[16] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, 2015.

[17] G. Salmoral, M. Rivas Casado, M. Muthusamy, D. Butler, P. P. Menon, and P. Leinster, “Guidelines for the use of unmanned aerial systems in flood emergency response,” Water (Switzerland), vol. 12, no. 2, p. 521, 2020.

[18] Y. Watanabe and Y. Kawahara, “UAV photogrammetry for monitoring changes in river topography and vegetation,” Procedia Engineering, vol. 154, pp. 317–325, 2016.

[19] T.-Z. Xiang, G.-S. Xia, and L. Zhang, “Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects,” IEEE Geoscience and Remote Sensing Magazine, vol. 7, no. 3, pp. 29–63, 2019.

[20] X. Zhang, Y. Zhou, J. Jin, Y. Wang, M. Fan, N. Wang, et al., “ICENETv2: A fine-grained river ice semantic segmentation network based on UAV images,” Remote Sensing, vol. 13, no. 4, pp. 1–17, 2021.

[21] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2881–2890, 2017.

Published

2025-06-10

Issue

Section

Research Article

How to Cite

[1] A. H. Zakaria, Y. Mohd Mustafah, and N. R. Jamil, “River Segmentation Using Deep Neural Networks on Aerial Orthophoto”, Mekatronika: J. Intell. Manuf. Mechatron., vol. 7, no. 1, pp. 60–70, Jun. 2025, doi: 10.15282/mekatronika.v7i1.12196.
