Mehdi Amiri; Farzad Amiri; Mohammad Hossein Pourasad; Seyfollah Soleimani
Abstract
Clean air quality, as one of the most essential needs of living organisms, has been compromised by natural and artificial activities. Dust storms have increased constantly in recent years, causing countless social, economic and environmental health damages for residents of the southern and southwestern regions of Iran. In this study, MODIS sensor data are used to investigate dust storms and detect horizontal visibility. The advantages of MODIS sensor data include high spectral and temporal resolution. In addition, data from meteorological stations are collected for the study period. After pre-processing the data and preparing field observations, the features required for modeling are derived from the MODIS sensor data through a differential method between selected bands of each MODIS image, together with features extracted from ground meteorological stations. Following further evaluation and the opinions of meteorological experts, 42 features are used, of which 36 are extracted from different bands of the MODIS images and 6 from meteorological station data. Next, the best features are identified through feature selection techniques, and a new method called ML-Based GMDH, which improves the GMDH neural network by replacing its partial functions with machine learning models, is used to detect dust concentration and horizontal visibility. To achieve appropriate accuracy, the meta-parameters of this model are tuned by the TLBO optimization algorithm. Furthermore, the basic GMDH and the machine learning methods SVM, MLP, MLR and RF, together with their ensemble model, are implemented for comparison with the main approach. Results show that the ML-Based GMDH method tuned with TLBO improves on the best machine learning methods and provides good accuracy in detecting dust concentration.
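The core idea of the ML-Based GMDH method described above — a GMDH layer whose partial descriptions are machine-learning models rather than quadratic polynomials — can be sketched as follows. This is a minimal illustration, not the authors' implementation: ridge regression stands in for the unspecified partial models, and the data are synthetic stand-ins for the 42 dust features.

```python
# Sketch of one GMDH-style layer: fit a small ML model (ridge regression
# here, purely illustrative) on every pair of input features, keep the
# best-scoring models, and pass their outputs on as the next layer's inputs.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # synthetic stand-in for dust features
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def gmdh_layer(X_tr, X_va, y_tr, y_va, keep=4):
    """Fit one ML model per feature pair; keep the `keep` best outputs."""
    scored = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        model = Ridge(alpha=1.0).fit(X_tr[:, [i, j]], y_tr)
        err = mean_squared_error(y_va, model.predict(X_va[:, [i, j]]))
        scored.append((err, model, (i, j)))
    scored.sort(key=lambda t: t[0])               # select by validation error
    best = scored[:keep]
    new_tr = np.column_stack([m.predict(X_tr[:, list(ij)]) for _, m, ij in best])
    new_va = np.column_stack([m.predict(X_va[:, list(ij)]) for _, m, ij in best])
    return new_tr, new_va, best[0][0]

Z_tr, Z_va, best_err = gmdh_layer(X_tr, X_va, y_tr, y_va)
```

Stacking such layers, with the surviving model outputs as new inputs, yields the self-organizing network structure; the meta-parameters (here, `keep` and the partial-model type) are what TLBO would tune.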
Ali Shamsoddini; Bahar Asadi
Abstract
Identifying and mapping crops provides important information for managing agricultural lands and estimating the area under cultivation. This study investigates the importance of red edge bands for the separation of crops including wheat, barley, alfalfa, beans, broad beans, flax, corn, sugar beet and potatoes using the random forest and support vector machine methods. For this purpose, the 2019 time series of Sentinel-1 and Sentinel-2 images of northwest Ardabil was loaded in the Google Earth Engine (GEE) platform. To study the effect of spectral and temporal information, vegetation indices and backscatter information on crop mapping, different band combinations were examined. Using the random forest feature selection method, important features were identified and introduced as the input of the random forest and support vector machine classifiers. Random forest provided the best results for all scenarios. The results showed that adding red edge wavelengths and red edge-based vegetation indices is more useful than other bands and vegetation indices for mapping barley, beans, broad beans and flax. The best result among the different feature combinations came from the time series of spectral features of Sentinel-2 images fused with the time series of Sentinel-1 images, for which the overall accuracy and kappa coefficient were 84.67% and 82.31%, respectively. Moreover, the results showed that red edge bands and red edge-based vegetation indices are efficient for distinguishing crops from one another.
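The two-stage workflow described above — random-forest feature selection followed by classification and accuracy assessment with overall accuracy and the kappa coefficient — can be sketched roughly as below. The data are synthetic; the real study uses Sentinel-1/2 time-series features in GEE, which this local sketch does not reproduce.

```python
# Sketch: rank features by random-forest importance, classify with the
# selected subset, and report overall accuracy and the kappa coefficient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in for per-pixel band/index time-series features
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: feature selection via impurity-based importances
selector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(selector.feature_importances_)[::-1][:10]   # 10 best features

# Step 2: classify using only the selected features
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top], y_tr)
pred = clf.predict(X_te[:, top])
oa = accuracy_score(y_te, pred)          # overall accuracy
kappa = cohen_kappa_score(y_te, pred)    # kappa coefficient
```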
Rasoul Atashi Deligani; Mina Moradizadeh; Behnam Tashayo
Abstract
Ground surface ozone is one of the most dangerous pollutants, with significant harmful effects on the residents of urban areas. The purpose of this study is to identify the factors affecting ozone concentration and to model its changes in Tehran using satellite data and different machine learning methods. For this purpose, pollutant concentration and meteorological data were used along with the satellite land surface temperature (LST) product over the period from 2015 to 2021. After calculating the correlation between ozone concentration and the independent parameters, ozone concentration was modeled in five different modes in terms of input parameters, learning method and data refinement. In the first and second modes, modeling was done using pollutant concentration and meteorological data through multivariate linear regression; the only difference between these two modes is the filtering of the input data using the WTEST method in the second mode. In the third mode, the LST product was added to the input data, and in the fourth and fifth modes, ozone was modeled using a multilayer neural network and a recurrent neural network, respectively. The comparison of the five modes showed that the first to fifth models estimated the ozone concentration with adjusted coefficients of determination of 0.5, 0.64, 0.69, 0.74 and 0.8, respectively. It was also found that among the pollutants, nitrogen monoxide, nitrogen dioxide and NOx have the greatest impact on ozone concentration, just as temperature, humidity and wind speed are the most influential meteorological variables. Although the use of the WTEST statistic led to the identification and elimination of inconsistencies and errors in the observations of the pollution measurement stations, the neural network methods showed better modeling performance than multivariate regression due to their lower sensitivity to noise. 
As a notable result, adding the LST product to the input data brought a 5% increase in accuracy in estimating ozone concentration.
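The comparison of modes rests on the adjusted coefficient of determination, which penalizes R² for the number of predictors. A hedged sketch of that comparison — multivariate linear regression versus a multilayer neural network, on synthetic stand-ins for the pollutant/meteorological/LST inputs — might look like this:

```python
# Sketch: fit multivariate linear regression and an MLP on the same
# predictors and compare adjusted R^2 on held-out data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def adjusted_r2(r2, n, p):
    """Adjusted coefficient of determination for n samples, p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))            # synthetic predictor stand-ins
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8, -0.2]) \
    + 0.3 * np.sin(X[:, 0]) + rng.normal(scale=0.2, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

lin = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=1).fit(X_tr, y_tr)

n, p = X_te.shape
r2_lin = adjusted_r2(lin.score(X_te, y_te), n, p)
r2_mlp = adjusted_r2(mlp.score(X_te, y_te), n, p)
```

In the study, the neural-network modes scored higher largely because of their lower sensitivity to residual noise in the station observations.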
Mehdi Teimouri; Omid Asadi Nalivan
Abstract
The purpose of this research is to determine the groundwater potential of areas and to prioritize the factors affecting it using the maximum entropy method and the new height above the nearest drainage (HAND) index. In the present study, 14 effective indicators were used for groundwater potential, including topographic, geological, climatic, hydrological and land use factors as well as the new HAND index. First, the factors were divided into two parts, topographic factors and other factors (geology, climate, hydrology and land use), and a groundwater potential map was prepared based on each. Then, by integrating the indices, the final groundwater potential map was prepared. Of the 2547 springs, 60% of the data were selected as training data and 40% as validation data using the Mahalanobis distance method. The results showed that according to the topographic indicators, the other factors and the combination of all indicators, 38.5, 27.4 and 34.7 percent of the area have groundwater potential, respectively. Also, based on the jackknife diagram, altitude, land use, slope, relative slope, the HAND index and lithology were the most important factors influencing groundwater potential. The area under the curve (AUC) of the ROC diagram indicates accuracies of 83, 83 and 85% (very good) at the training stage and 82, 81 and 84% (very good) at the validation stage based on the topographic indicators, the other factors and the integration of all indicators, respectively. Given that the HAND index was an effective factor for groundwater, it has a crucial role in identifying areas with groundwater potential.
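The accuracy measure used throughout — the area under the ROC curve for a presence model evaluated against held-out springs — can be sketched as below. Logistic regression stands in for the MaxEnt model, and the five synthetic predictors stand in for the 14 real indicators; both substitutions are assumptions for illustration only.

```python
# Sketch: fit a presence/background potential model and score it with
# the area under the ROC curve (AUC), the measure quoted in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))            # stand-ins for the 14 indicators
# Synthetic "spring present" label driven by the first two indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=2)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])   # validation AUC
```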
Naser Farajzadeh; Mehdi Hashemzadeh
Abstract
Generally, the photos captured by drones and satellites include both natural scenes and man-made objects. Having these two categories classified, we can extract important information from aerial scenes, such as the shapes and alignments of structures, and then create labeled aerial images accordingly. Obtaining such information is of great interest in, for example, military, urban and environmental protection applications. However, due to the huge amount of data collected in the form of images, manual processing of such data is impossible, so automatic techniques based on artificial intelligence are increasingly in demand. There are numerous studies on this topic, among which the detection of buildings, vehicles, roads and vegetation are of most interest. In this paper, we introduce a method to detect man-made objects in aerial images based on a new set of easily extracted color statistical features together with a learning model. Experimental results on a publicly available dataset, the Massachusetts dataset, show promising results in terms of both accuracy and processing time; the accuracy and the average processing time are 90.07% and 0.96 seconds, respectively.
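The general shape of such a pipeline — simple per-patch color statistics fed to a learner — can be sketched as follows. The specific statistics (per-channel mean and standard deviation), the random-forest learner and the synthetic patches are all illustrative assumptions, not the paper's exact feature set or model.

```python
# Sketch: compute cheap color statistics per image patch and train a
# classifier to separate "man-made" from "natural" patches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_stats(patch):
    """Mean and std of each color channel of an HxWx3 patch (6 features)."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

rng = np.random.default_rng(3)
# Synthetic patches: low color variance ("man-made") vs high ("natural")
patches = [rng.normal(0.5, 0.05, (16, 16, 3)) for _ in range(50)] + \
          [rng.normal(0.5, 0.30, (16, 16, 3)) for _ in range(50)]
labels = np.array([1] * 50 + [0] * 50)

features = np.array([color_stats(p) for p in patches])
clf = RandomForestClassifier(random_state=0).fit(features, labels)
acc = clf.score(features, labels)
```

The appeal of such features is the low processing cost per patch, which is what makes sub-second per-image times plausible.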
Nima Farhadi; Abas Kiani; Hamid Ebadi
Abstract
Object detection is one of the fundamental issues in the image interpretation process, especially for remote-sensing imagery. One of the most effective and efficient approaches in this field is the use of deep learning algorithms for feature extraction and interpretation. An object is a collection of unique patterns that differ from their adjacent surroundings; this difference usually occurs in one or more features simultaneously and can appear as differences in shape, color and gray values. In this regard, deep learning, as an efficient branch of machine learning, is useful for generating high-level concepts through learning in different layers. In this research, a database based on the environmental and geographical conditions of several Iranian airports was created, and an optimal learner model was developed with a convolutional neural network. For this purpose, in the raw data processing stage, in addition to using the transfer learning method, feature vectors were extracted and delivered to an SVM model to classify the objects. The output values were compared with the values obtained from the test image for each object and analyzed in a repeatable process for structural matching. A precision of 98.21% and an F1-measure of 99.1% were achieved for identification of the target objects.
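The second stage of the pipeline above — feature vectors from a pretrained network handed to an SVM, scored by precision and F1 — can be sketched as below. Since the abstract does not name the CNN backbone, randomly generated vectors stand in for its features; everything here is an assumption for illustration.

```python
# Sketch: classify (stand-in) CNN feature vectors with an SVM and report
# precision and F1, the two metrics quoted in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, f1_score

# 64-dimensional synthetic vectors standing in for transfer-learning features
X, y = make_classification(n_samples=400, n_features=64, n_informative=16,
                           random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)   # stage 2: SVM on extracted features
pred = svm.predict(X_te)
precision = precision_score(y_te, pred)
f1 = f1_score(y_te, pred)
```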
Seyed Yousef Sajjadi; Saeid Parsian
Volume 10, Issue 2 , September 2018, , Pages 1-14
Abstract
In this study, the fusion of hyperspectral and LiDAR data was used to propose a new method to detect buildings using a machine learning algorithm. The data sets, provided by the National Science Foundation (NSF)-funded Center for Airborne Laser Mapping (NCALM) over the University of Houston campus and the neighboring urban area, were used. The objectives of this study were: 1) automatic building extraction using the fused hyperspectral and LiDAR data (automation), 2) detection of the maximum number of listed buildings in the study area (completeness), and 3) achieving high accuracy in building detection throughout the classification procedure (accuracy and precision). After classification of the buildings, a comparison was made between the results obtained by the proposed method and the reference method in this field. Our proposed method showed better accuracy for building detection in a much shorter time compared to the reference method. The accuracy of the classification was assessed by four parameters, Precision, Completeness, Overall Accuracy and Kappa Coefficient, and the values of 96%, 100%, 99% and 0.94 were obtained, respectively.
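At its simplest, the fusion step amounts to stacking per-pixel spectral features and LiDAR-derived height into one feature vector before classification. The sketch below illustrates that idea with synthetic placeholders for the Houston data; the random-forest classifier is an assumption, as the abstract does not name the learner.

```python
# Sketch: feature-level fusion of hyperspectral bands and a LiDAR height
# feature, followed by building/non-building classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
n = 500
spectral = rng.normal(size=(n, 10))      # stand-in hyperspectral bands
height = rng.normal(size=(n, 1))         # stand-in LiDAR-derived height
# Synthetic label: "building" when spectra and height agree
y = (spectral[:, 0] + 2.0 * height[:, 0] > 0).astype(int)

fused = np.hstack([spectral, height])    # the fusion step: stack features
X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=5)
clf = RandomForestClassifier(random_state=5).fit(X_tr, y_tr)
kappa = cohen_kappa_score(y_te, clf.predict(X_te))
```

Height is what lets the classifier separate buildings from spectrally similar flat surfaces such as roads, which is the main payoff of fusing the two sensors.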
F Aghighi; O.M Ebadati; H Aghighi
Volume 9, Issue 2 , December 2017, , Pages 41-60
Abstract
Light Detection and Ranging (LiDAR) point cloud datasets and 3-dimensional (3-D) models have been extensively used for urban feature extraction, urban management, forestry management, urban green space management, tourism management, robotics, and video and computer game production. One of the main steps toward accurate 3-D models is the clustering and classification of LiDAR point cloud data. The main purpose of this research is to find out which machine learning techniques are most promising for learning and classification of LiDAR point cloud data in an urban area. Therefore, the performances of K-nearest neighbor (KNN), Decision Trees (D3), Artificial Neural Networks (ANN), Naive Bayes (NB), Support Vector Machine (SVM) and Markov Random Field (MRF) classifiers were evaluated on the LiDAR and aerial image dataset of Vaihingen, Germany, in the context of the "ISPRS Test Project on Urban Classification and 3D Building Reconstruction". According to the literature review, the MRF model had not previously been used to classify LiDAR point cloud data in Iran. In this research, we utilized all the geometrical features and intensity values of the LiDAR data and aerial images, as well as extracted eigenvalue-based features, to distinguish five urban object classes: impervious surfaces, buildings, low vegetation, trees and cars. To compute the eigenvalues from the local point distribution, this paper introduces a new cubic structure that has not been found in previous studies. The final outputs of the 3-D classification techniques in this research were 2-D maps, which were evaluated against the benchmark ISPRS test maps. The evaluation shows that the MRF model, with an overall accuracy of 88.08% and a kappa value of 0.83, outperforms the other techniques in classifying the employed LiDAR point clouds.
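The eigenvalue-based features mentioned above come from the 3x3 covariance matrix of the coordinates in a local neighborhood: the sorted eigenvalues describe whether the neighborhood is linear, planar or scattered. The sketch below uses a simple cubic neighborhood to illustrate the idea; it is not the paper's exact cubic structure, and the derived linearity/planarity/sphericity ratios are standard point-cloud descriptors assumed here for illustration.

```python
# Sketch: eigenvalue-based shape features from a cubic point neighborhood.
import numpy as np

def eigen_features(points, center, half_size=1.0):
    """Covariance eigenvalues of the points inside a cube around `center`,
    turned into linearity / planarity / sphericity shape descriptors."""
    mask = np.all(np.abs(points - center) <= half_size, axis=1)
    neigh = points[mask]
    cov = np.cov(neigh.T)                            # 3x3 covariance matrix
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

rng = np.random.default_rng(6)
# A roughly planar patch (e.g. a roof): wide x/y spread, tiny z spread
plane = np.column_stack([rng.uniform(-1, 1, 300),
                         rng.uniform(-1, 1, 300),
                         rng.normal(scale=0.01, size=300)])
lin, plan, sph = eigen_features(plane, np.zeros(3), half_size=2.0)
```

For roof-like points planarity dominates, for wires and edges linearity dominates, and for vegetation the three eigenvalues are comparable — which is exactly what makes these features useful for separating the five urban classes.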