Semiautomatic Mapping of Center Pivot Irrigated Areas Using Sentinel-2 Images and GEOBIA Approach

Image analysis and feature extraction of remote sensing data are significant for mapping irrigated agricultural areas as a source of information to improve water management and agricultural planning. This paper presents a segmentation-based GEOBIA (Geographic Object-Based Image Analysis) approach to extract areas irrigated by the Center Pivot Irrigation System (CPIS). This study proposes a semi-automated recognition of circular patterns for the mapping of regions irrigated by center pivots, using Sentinel-2 MSI images with 10 meters of spatial resolution. A set of images from different seasons, humid and dry, was used to maximize the detection of CPIS occurrences. A multiresolution segmentation method was applied, and a large number of segment-based shape features was extracted and used as input to a feature selection procedure (shape descriptors: Area; Compactness; Circularity Factor; Length/Width; Radius of smallest enclosing ellipse; and Roundness). In addition, the shape descriptor "Circularity Factor" was developed in this research and played an important role during the preliminary classification processes. The accuracy assessment of the preliminary classifications validated the use of the Circularity Factor together with the other chosen shape descriptors to achieve better results in CPIS detection. Furthermore, 86.23% of the CPIS area mapped in the classification process is in accordance with the ground truth map. This methodology can be used to map large areas in a relatively short time and provides a tool for monitoring irrigated areas.


Introduction
Irrigation is one of the oldest techniques used by man for agricultural production, being practiced since the ancient civilizations. It was developed in arid regions, such as Egypt and Mesopotamia, and it is characterized by the set of techniques and equipment used to supply all or part of the water demanded by crops (ANA 2019). The use of irrigation has several advantages for agricultural production and can increase crop productivity by up to three times compared to rainfed cultivation (Zonta et al. 2015). In Brazil, irrigation is responsible for 55% of the water flow withdrawn and 75% of the water flow consumed (ANA 2019). However, the indiscriminate use of water can generate conflicts and cause profound environmental damage, often irreparable at the local level (Wenger, Vadjunec & Fagin 2017), such as erosion and loss of arable soils, pollution, salinization and eutrophication of water bodies, and the depletion of underground water reserves (Pereira Júnior, Ferreira & Miziara 2017).
Mappings of irrigated agriculture have used orbital remote sensing images as a source of information for locating and quantifying areas (Bégué et al. 2018), especially those equipped with the center pivot irrigation system (CPIS). The CPIS, due to its mode of operation, gives a circular or semicircular geometric shape to the area where it is used, which favors its identification in satellite images. However, the existing research on this topic has mostly used visual interpretation and manual digitization of these irrigated areas (ANA 2019; Ferreira et al. 2011; Lima et al. 2015; Martins et al. 2016; Sano et al. 2005; Schmidt et al. 2004), which consumes a lot of time and work, making this approach impractical for large areas.
Few studies have achieved partial or total automation in the recognition of circular patterns for the mapping of areas irrigated by center pivots, as is the case of the work carried out by Albuquerque et al. (2020), Maranha (2018), Saraiva et al. (2020), and Zhang et al. (2018).
In order to overcome the existing limitations in traditional methods of image classification based only on pixel spectral information, the Geographic Object-Based Image Analysis (GEOBIA) approach interprets contiguous groups of pixels (objects), which are merged based on spectral and shape similarities. This approach is less sensitive to spectral confusion as it uses the full structural parameters of an image, incorporating information about color, hue, texture, pattern, shape, shadow, context, and size (Blaschke et al. 2014).
The GEOBIA approach has been used in the mapping of both irrigated and non-irrigated agriculture in several regions of the globe. Vogels et al. (2019a) and Vogels et al. (2019b) used this approach to map small farms using irrigation in Africa. Ozelkan, Chen and Ustundag (2016) carried out the monitoring and comparison of irrigated and non-irrigated areas using the GEOBIA approach and Landsat-8 images. Santos et al. (2019) classified different annual crops, pastures and perennial crops in the center of the State of São Paulo, Brazil. Despite the most varied applications of GEOBIA, there are still no studies that have used this approach to detect areas irrigated by center pivots. This paper presents an object-oriented assessment approach optimized for the detection and classification of areas irrigated by center pivots, using multiresolution segmentation and shape descriptors in Sentinel-2 images with 10 meters of spatial resolution.

Methodology and Data
The methodology used in this study included the following steps: 1) definition of a study area with a high concentration of CPIS; 2) acquisition of Sentinel-2/MSI images with 10 meters of spatial resolution, distributed across the dry and rainy seasons to maximize CPIS occurrence; 3) definition of the ground truth CPIS dataset; 4) multiresolution segmentation tests in the training area to define the segmentation parameters that allow isolating the CPIS; 5) selection of shape descriptors and parameters for CPIS classification; 6) image processing using the multiresolution segmentation parameters and the shape descriptors for CPIS classification in the test area; and 7) accuracy analysis. This study was developed in the eCognition® Developer 8.8 software on a computer equipped with an AMD Radeon GPU with 2 GB of memory, 12 GB of RAM, and an Intel Core i5-3337U CPU with a 1.80 GHz clock speed.

Study Area
The study area is located between the states of São Paulo (SP) and Minas Gerais (MG), in Brazil, more specifically between the points of geographic coordinates 20°24'30.974"S, 48°48'41.205"W and 19°55'47.212"S, 48°8'43.228"W. This area covers approximately 3,657.2 km². Figure 1 shows the study area.
The study area presents a high concentration of irrigation systems, and the CPIS is the most used (ANA 2019). The region's climate is classified as Aw by the Köppen-Geiger classification (Rolim et al. 2007), with the months between April and October corresponding to the period of lowest precipitation, requiring additional water supply by irrigation to sustain cultivation.

Image Dataset
A time series composed of nine Sentinel-2 images, distributed between May 2020 and May 2021, was used. The Blue (490 nm), Green (560 nm), Red (665 nm) and NIR (842 nm) bands, with 10 meters of spatial resolution, were used to carry out this research. The selected images have less than 5% cloud cover over the study area. Table 1 presents the acquisition date and tile of the selected images.
Images distributed along the year, covering the rainy and dry seasons, are fundamental to detect CPIS due to the dynamics of crop phenological stages. Albuquerque et al. (2020) highlight the importance of using images of both the rainy season and the dry season in the identification of CPIS; however, obtaining many orbital images in the rainy season is difficult due to the presence of clouds. During the rainy season, the images present a high percentage of photosynthetically active vegetation, which can cause difficulties in isolating the CPIS in the segmentation process. On the other hand, in the dry season, irrigated areas stand out with adequate distinction in relation to non-irrigated areas, making it easier to isolate CPIS in the segmentation process. In this sense, the use of a time series containing as many images as possible is decisive to detect CPIS. Figure 2 shows the difference between an image taken at the end of the rainy season and an image taken during the dry season.

Training and Test Data
The study area was divided into two parts, the first for training (70%) and the second for methodology validation (30%). In the training area, through visual inspection, a specialist identified and mapped 448 CPIS units, which were considered the ground truth map. In the training area, multiresolution segmentation was performed to isolate the CPIS image-objects. This was the method used to generate the shape parameters applied in the CPIS classification process.
In the validation area, 115 CPIS were identified, mapped as ground truth, and used for accuracy assessment. Figure 3 shows the training and validation areas.

Multiresolution Segmentation
Proposed by Baatz and Schape (2000), multiresolution segmentation is a fundamental step for the generation of objects/segments, which are the basic processing units used in a GEOBIA approach. Object-based image analysis has significant advantages over pixel-based methods because it makes use of statistical calculations, textural and shape parameters, and topological relations, in addition to having a close relationship between image-objects and real-world objects (Benz et al. 2004). The objects resulting from this segmentation are based on spatial (shape) and spectral (digital value) parameters and can also be regrouped into larger objects by means of fusion, giving rise to super-objects or regions. The generation of super-objects creates a hierarchical network in which each object knows its context, its neighborhood, its sub-objects and super-objects, which favors precise analysis within a specific region. The construction of the hierarchical network using various levels of segmentation allows the extraction of objects of interest represented at each level. It is therefore important that the segmentation is carried out in as many levels (resolutions) as necessary to separate the objects of interest, so that they can be recognized and extracted. Figure 4 shows an example of a hierarchical network created by multiresolution segmentation.
In the training area, multiresolution segmentation tests were performed to isolate the image-objects of interest (CPIS). The tests involved segmentations at various resolutions (Scale parameter) and variations of the weights of Shape and Compactness. The evaluation results are presented in Table 2.
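To make the idea behind multiresolution segmentation concrete, the sketch below implements a toy, single-band, spectral-only region-merging procedure in the spirit of Baatz and Schape (2000). It is not the eCognition implementation: the real algorithm also weighs shape and compactness heterogeneity and uses mutual best fitting; this simplified version only shows how the Scale parameter bounds the heterogeneity increase a merge may cause.

```python
def segment(image, scale):
    """Greedy spectral region merging on a 2D list of brightness values.
    `scale` is the maximum allowed heterogeneity increase per merge
    (the role played by the Scale parameter)."""
    h, w = len(image), len(image[0])
    # start with one object per pixel
    label = [[r * w + c for c in range(w)] for r in range(h)]
    objects = {r * w + c: [(r, c)] for r in range(h) for c in range(w)}

    def stats(pixels):
        vals = [image[r][c] for r, c in pixels]
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        return n, var ** 0.5  # object size and standard deviation

    merged = True
    while merged:
        merged = False
        for oid in list(objects):
            if oid not in objects:      # already absorbed this pass
                continue
            pixels = objects[oid]
            # collect ids of 4-connected neighboring objects
            neigh = set()
            for r, c in pixels:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and label[rr][cc] != oid:
                        neigh.add(label[rr][cc])
            # pick the cheapest merge (weighted std-dev increase)
            best, best_cost = None, scale
            for nid in neigh:
                n1, s1 = stats(pixels)
                n2, s2 = stats(objects[nid])
                nm, sm = stats(pixels + objects[nid])
                cost = nm * sm - (n1 * s1 + n2 * s2)
                if cost < best_cost:
                    best, best_cost = nid, cost
            if best is not None:
                objects[oid] = pixels + objects.pop(best)
                for r, c in objects[oid]:
                    label[r][c] = oid
                merged = True
    return objects

# A bright 2x2 "pivot" on a dark background separates into two objects
toy = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(len(segment(toy, 2.0)))  # → 2
```

With a larger Scale parameter the bright and dark regions would eventually fuse into a single object, which mirrors why the segmentation tests varied the Scale parameter until the CPIS were isolated.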

Shape Descriptors and Classification
Once the parameters of the multiresolution segmentation were defined, the choice of shape descriptors and their parameters for the classification of the CPIS began. Samples of image-objects created in the multiresolution segmentation step, corresponding to the CPIS and Non-CPIS classes, were manually classified, with the objective of comparing the potential differentiation of the classes using the various shape descriptors available in the eCognition® Developer 8.8 software together with the "Circularity Factor" shape descriptor described by Maranha (2018). The shape descriptors used in this study are based on the geometry of the image-objects and are calculated from the pixels that form them (Trimble 2012).
The Circularity Factor describes the relationship between the area of the image-object i and the area of a theoretical circle whose radius is derived from the length of the image-object i. It is calculated by Equation 1:

Circularity Factor_i = A_i / (π (ℓ_i / 2)²)  (1)

where A_i is the area (in pixels) of the image-object i and ℓ_i is its length. The following shape descriptors were selected for the classification of the CPIS: 1) Area; 2) Compactness; 3) Circularity Factor; 4) Length/Width; 5) Radius of smallest enclosing ellipse; and 6) Roundness. The parameters used for the classification of CPIS are presented in Table 3.
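A minimal sketch of how the Circularity Factor could be computed, assuming (as the text describes) that the theoretical circle's radius is half the image-object length; the pixel values in the demo are illustrative, not taken from the paper:

```python
import math

def circularity_factor(area_px, length_px):
    """Circularity Factor: image-object area divided by the area of a
    theoretical circle whose radius is half the object length.
    Values near 1 indicate a near-circular object such as a CPIS."""
    radius = length_px / 2.0
    return area_px / (math.pi * radius ** 2)

# A circular object with a 100-pixel diameter: CF ≈ 1
print(round(circularity_factor(math.pi * 50 ** 2, 100), 3))  # → 1.0
# An elongated object of the same length but much smaller area: CF << 1
print(round(circularity_factor(1000, 100), 3))  # → 0.127
```

This is why the descriptor discriminates well: fields, roads and riparian strips can match a pivot in area or length, but only near-circular objects push the ratio toward 1.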

Image Processing
The image processing stage was carried out through the joint and sequential use of multiresolution segmentation and classification, to extract as much information of interest as possible from the orbital images, according to the scheme presented in Figure 5.
For each level, a sequence of nine segmentations and classifications was performed. This process occurred as follows: (i) starting at the first level, multiresolution segmentation was performed using only the bands corresponding to the first image of the time series (05/29/20), which resulted in the generation of a certain number of image-objects of interest, which were classified using the shape descriptors presented; (ii) then, based on the already existing segmentation and ignoring the image-objects already classified, a re-segmentation was performed within the same level (same Scale parameter), however, using the bands of the second image of the time series (08/12/20). Thus, other image-objects were created and afterwards classified. The process described in (ii) was repeated for the other images of the time series. After the classification of the ninth image of the time series at the first level of segmentation (Scale parameter = 1000), the unclassified image-objects were re-segmented with a new Scale parameter (800), giving rise to a lower hierarchical level, which went through the same segmentation and classification sequences. This process occurred down to the fifth level, where the Scale parameter was equal to 200.
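The control flow described above can be sketched as a nested loop over five hierarchical levels and nine image dates. In this sketch, `segment_unclassified` and `classify_by_shape` are hypothetical stand-ins: the real segmentation and shape-descriptor rules run inside eCognition, and only the loop structure mirrors the text.

```python
SCALE_LEVELS = [1000, 800, 600, 400, 200]          # five hierarchical levels
TIME_SERIES = [f"img_{k}" for k in range(1, 10)]   # nine Sentinel-2 dates

def segment_unclassified(image, scale, already_classified):
    # Toy stand-in: pretend each (image, scale) pair yields one candidate
    # object among the still-unclassified regions.
    return [(image, scale)]

def classify_by_shape(obj):
    # Toy stand-in for the shape-descriptor rule set (Table 3).
    _, scale = obj
    return scale >= 600

def process_scene():
    classified = []                                 # accepted CPIS objects
    for scale in SCALE_LEVELS:                      # level by level...
        for image in TIME_SERIES:                   # ...nine dates each
            candidates = segment_unclassified(image, scale, classified)
            classified += [o for o in candidates if classify_by_shape(o)]
    return classified

print(len(process_scene()))  # → 27  (3 accepted levels × 9 images)
```

The key design point, preserved here, is that objects classified at a coarser level are excluded from re-segmentation at finer levels, so each pivot is detected at the first resolution capable of isolating it.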

Accuracy Assessment
We evaluated the performance using metrics commonly used for object detection: Total Accuracy, Precision, Recall, F1-score and IoU (Albuquerque et al. 2020; Congalton, Oderwald & Mead 1983; Everingham et al. 2010; Foody 2002; Ge, Wang & Liu 2007; Russakovsky et al. 2015; Saraiva et al. 2020; Stehman & Wickham 2011; Ye et al. 2018). These metrics were computed object-wise, i.e., every object in the final classification level in the validation set was classified as TP, TN, FP or FN. The equations for the accuracy metrics are listed in Table 4.
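The standard formulas behind these metrics can be sketched as follows; the confusion counts and areas in the demo are hypothetical, chosen only to illustrate the computation:

```python
def detection_metrics(tp, tn, fp, fn):
    """Object-wise detection metrics from the confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "total_accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

def iou(area_classified, area_truth, area_intersection):
    """IoU (Jaccard index): |C ∩ R| divided by |C ∪ R|."""
    union = area_classified + area_truth - area_intersection
    return area_intersection / union

# Hypothetical counts: 90 objects detected correctly, 10 missed,
# 5 false alarms, 900 true negatives
m = detection_metrics(tp=90, tn=900, fp=5, fn=10)
print(round(m["recall"], 2))        # → 0.9
print(round(iou(100, 110, 95), 3))  # → 0.826
```

Note that Total Accuracy is dominated by the many true negatives in an object-wise evaluation, which is why Precision, Recall, F1-score and IoU are the more informative metrics here.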

Results
In the training stage, of the 448 CPIS mapped as ground truth, 330 could be isolated in the multiresolution segmentation process, representing a recovery rate of 73.66% of the CPIS. The image-objects of these 330 CPIS were used to extract the classification parameters applied in the test phase. In the validation stage of the methodology, of the 115 CPIS mapped as ground truth, 100 were correctly classified. Table 5 shows the metrics used to assess the results, and Table 6 shows the confusion matrix. Figure 6 shows in detail part of the validation area and the classification results.
The summary of the results of the image processing is shown in Table 7. The results presented for each level of multiresolution segmentation are: i) number of image-objects created; ii) classified; iii) correctly classified; iv) erroneously classified; v) classification accuracy at each level (Relative Accuracy); and vi) Overall Accuracy. The values presented correspond to the total number of image-objects obtained at the end of each segmentation level.

Table 4
Summary of accuracy metrics used in the object detection, where TP is true positive, TN is true negative, FP is false positive, FN is false negative, |C| is the sum of the areas of the classified objects, |R| is the sum of the areas of the ground truth objects and |C ∩ R| is the sum of the intersection areas between the classified objects and the ground truth objects.

Accuracy Metric | Equation
Total Accuracy (TA) | TA = (TP + TN) / (TP + TN + FP + FN)
Precision (P) | P = TP / (TP + FP)
Recall (R) | R = TP / (TP + FN)
F1-score (F1) | F1 = 2 × P × R / (P + R)
IoU | IoU = |C ∩ R| / (|C| + |R| − |C ∩ R|)

Discussion
The study performed by means of the GEOBIA approach proved effective to detect CPIS, considering the possibility of mapping CPIS by combining multiresolution segmentation and classification based only on geometric characteristics, in other words, shape descriptors of the objects of interest. This approach brought a significant improvement in the speed of CPIS identification in comparison to visual interpretation mapping. Most CPIS inventories have been based on visual interpretation of satellite images. The first work in this sense was developed by Rundquist et al. (1989), who used images from the MSS and TM sensors of the Landsat satellites to annually inventory the areas irrigated by center pivots in the state of Nebraska in the USA. In Brazil, other studies were developed using visual identification, such as those by ANA (2019), Braga and Oliveira (2005), Ferreira et al. (2011), Guimarães and Landau (2011), Lima et al. (2015), Martins et al. (2016), Sano et al. (2005), and Schmidt et al. (2004).
The methodology conducted in the study area achieved promising results in CPIS identification, with Accuracy over 99%, Precision over 94% and Recall near 87%. In terms of accuracy, precision, recall and F1-score, our results are consistent with those obtained by other works that used some segmentation technique (Albuquerque et al. 2020; Carvalho et al. 2021; Graf et al. 2020; Maranha 2018; Saraiva et al. 2020). The high values of the F1-score and Kappa index indicate the satisfactory performance of the methodology in identifying and isolating objects of interest. The IoU metric (Jaccard index) evaluates performance in image segmentation tasks, including semantic segmentation and object detection, measuring the region of coincidence between the segmentation result and the ground truth (Ge, Wang & Liu 2007). In this study, the value of IoU = 0.8623 indicates that 86.23% of the area of the CPIS mapped in the classification overlaps (intersects) the ground truth map.

Conclusion
The basic purpose of this study was to evaluate a semiautomatic methodology based on the GEOBIA approach to detect and map center pivot irrigated areas. It was also possible to search for well-defined circular geometric shapes in Sentinel-2 images, with 10 meters of spatial resolution, in the classification process. This study allowed mapping CPIS of different sizes, with diverse land cover and with different stages of crop growth. The use of a temporal series is one of the most important factors to reach good results, due to the variability of the phenological stages of annual crop cultivation, e.g., soybean, corn, wheat, beans, and others. The shape descriptors used to classify the CPIS, i.e., Area (pixels), Compactness, Circularity Factor, Length/Width, Radius of smallest enclosing ellipse, and Roundness, are powerful tools in the recognition of circular patterns in an object-based analysis approach.
The shape descriptors are quite important to detect CPIS, considering their peculiar patterns. The combination of different shape descriptors achieved the best results. The Circularity Factor descriptor developed in this study played a significant role during the preliminary classification processes. The accuracy assessment of the preliminary classifications validated the use of the Circularity Factor together with the other chosen descriptors to reach better results in CPIS detection. The method suggested in this paper can be applied in other areas, requiring only small adjustments in the classification parameters, specifically in the descriptor "Area", since the other descriptors tend to be invariant to the size of the objects, being more related to shape. This methodology can be used to map large areas in a relatively short time and provides a tool for monitoring irrigated areas. This sort of knowledge is very useful for agrarian issues, water management, energy consumption, and land use planning. Future studies should be conducted in order to improve this methodology, mainly in regions with a high concentration of CPIS.

Figure 1
Figure 1 Location map of the study area.

Figure 2
Figure 2 Sentinel-2 images from two different periods: A. End of the rainy period (May 2020) and zoomed areas (A1 and A2); B. Dry period (September 2020) and zoomed areas (B1 and B2). The red areas represent the photosynthetically active regions.

Figure 3
Figure 3 Training and Validation area and CPIS ground truth mapping.
Equation 1: Circularity Factor = image-object area / theoretical circle area, where the circle radius is derived from the length of the image-object i.

Figure 4
Figure 4 Multiresolution segmentation and hierarchical net between levels of segmentation.

Figure 5
Figure 5 Multiresolution segmentation and classification sequence process beginning at scale parameter 1000 and finishing at scale parameter 200.

Figure 6
Figure 6 Detail of validation area showing classification result.

Table 1
Image number, acquisition date and tile of the Sentinel-2 MSI images.

Table 3
Shape parameters used for CPIS classification.

Table 5
Accuracy metrics calculated using an object-wise analysis.

Table 6
Confusion matrix of CPIS classification.

Table 7
Summary of results of image processing.