Development of a Low-Cost Terrestrial Mobile Mapping System for Urban Vegetation Detection Using Convolutional Neural Networks
DOI: https://doi.org/10.11137/1982-3908_2022_45_46008
Keywords: Mobile geospatial data acquisition systems, NIR imaging, Semantic segmentation
Abstract
Urbanization has brought many pollution-related issues that can be mitigated by the presence of urban vegetation. It is therefore necessary to map vegetation in urban areas to support the planning and implementation of public policies. Terrestrial Mobile Mapping Systems (TMMS), a technology introduced in recent decades, provide cost- and time-effective data acquisition; they consist primarily of a navigation system and an imaging system, both mounted on a rigid platform attachable to the top of a ground vehicle. In this context, we propose a low-cost TMMS that images in the near-infrared (NIR) band, where vegetation is highly discriminable. After image acquisition, the imagery must be semantically segmented into vegetation and non-vegetation classes. Convolutional Neural Networks (CNNs) are the current state of the art in semantic segmentation. In this study, CNNs were trained and tested, reaching a mean Intersection over Union (IoU) of 83%. These results, which demonstrate good performance for the trained network, indicate that the developed TMMS is suitable for capturing data on urban vegetation.
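As a concrete illustration of the quantities mentioned above, the minimal Python/NumPy sketch below derives a vegetation mask by thresholding an NDVI-style index computed from hypothetical NIR and red reflectances (the threshold merely stands in for the CNN prediction actually used in the paper) and then scores it against a reference mask with the Intersection over Union (IoU) indicator. All array values, names, and the 0.4 threshold are illustrative assumptions, not the authors' data or implementation.

import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    # The small floor on the denominator guards against division by zero.
    return (nir - red) / np.maximum(nir + red, 1e-6)

def iou(pred, truth):
    # Intersection over Union between two binary masks of equal shape.
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Hypothetical 2x4 reflectance patches with values in [0, 1].
nir = np.array([[0.8, 0.7, 0.2, 0.1],
                [0.9, 0.3, 0.2, 0.6]])
red = np.array([[0.1, 0.2, 0.3, 0.2],
                [0.1, 0.2, 0.3, 0.1]])

pred_mask = ndvi(nir, red) > 0.4            # stand-in for the CNN output
ref_mask = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0]])         # hand-labelled reference mask

print(f"IoU = {iou(pred_mask, ref_mask):.2f}")  # 3 / 5 pixels -> 0.60

On real data, the per-image IoU scores are averaged over all evaluated images (and, for multi-class segmentation, over classes), which is how a summary figure such as the 83% mean IoU reported above is obtained.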
References
Badrinarayanan, V., Kendall, A. & Cipolla, R. 2017, 'SegNet: A deep convolutional encoder-decoder architecture for image segmentation', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481-95. https://doi.org/10.1109/TPAMI.2016.2644615
Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K. & Yuille, A.L. 2018, 'DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-48. https://doi.org/10.1109/TPAMI.2017.2699184
El-Sheimy, N. 2005, ‘An overview of mobile mapping systems’, Proceedings of the FIG Working Week, pp. 16-21.
Fawcett, T. 2006, 'An introduction to ROC analysis', Pattern Recognition Letters, vol. 27, no. 8, pp. 861-74. https://doi.org/10.1016/j.patrec.2005.10.010
Kannojia, S.P. & Jaiswal, G. 2018, ‘Effects of varying resolution on performance of CNN based image classification: An experimental study’, International Journal of Computer Sciences and Engineering, vol. 6, no. 9, pp. 451-6. http://dx.doi.org/10.26438/ijcse/v6i9.451456
Mikołajczyk, A. & Grochowski, M. 2018, 'Data augmentation for improving deep learning in image classification problem', 2018 International Interdisciplinary PhD Workshop (IIPhDW), IEEE, pp. 117-22. https://doi.org/10.1109/IIPHDW.2018.8388338
Mostajabi, M., Yadollahpour, P. & Shakhnarovich, G. 2015, 'Feedforward semantic segmentation with zoom-out features', Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3376-85. https://doi.org/10.48550/arXiv.1412.0774
Myneni, R.B., Hall, F.G., Sellers, P.J. & Marshak, A.L. 1995, ‘The interpretation of spectral vegetation indexes’, IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 2, pp. 481-6. https://doi.org/10.1109/TGRS.1995.8746029
Nicodemo, M.L.F. & Primavesi, O. 2009, 'Por que manter árvores na área urbana?' [Why keep trees in urban areas?], Embrapa Pecuária Sudeste, Documentos (INFOTECA-E).
Pohlen, T., Hermans, A., Mathias, M. & Leibe, B. 2017, 'Full-resolution residual networks for semantic segmentation in street scenes', Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4151-60. https://doi.org/10.48550/arXiv.1611.08323
Sabottke, C.F. & Spieler, B.M. 2020, 'The effect of image resolution on deep learning in radiography', Radiology: Artificial Intelligence, vol. 2, no. 1, e190015. https://doi.org/10.1148/ryai.2019190015
License
The articles published in this journal are licensed under Creative Commons Attribution 4.0 International (CC BY 4.0), which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.