The Digital Elevation Model (DEM) is a well-established technique for representing terrain relief. DEM construction is the modeling of the earth's surface from existing data, and DEMs serve as one of the fundamental information layers used in GIS data structures. The main aim of this research is to present a methodology for assessing DEM generation methods. The DEM data were extracted from open-source data, e.g. Google Earth, and the tested data were compared with data produced by formal institutions such as the General Directorate of Surveying. The study area is in the south of Iraq (Al-Gharraf, Dhi Qar governorate). The DEM creation methods tested are kriging, IDW (inverse distance weighting), spline, and natural neighbor. The research used different software packages for processing and analysis, such as ArcGIS 10.2, TCX, and Civil 3D. A two-sample t-test was adopted to investigate the mean of the elevation differences between the compared datasets. The results showed that spline is the best method for building a DEM in this study area.
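As a minimal illustration of one of the interpolators compared above, the sketch below implements basic inverse distance weighting in NumPy. The sample points, query grid, and power parameter are hypothetical; the study itself used the ArcGIS implementations.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: each unknown elevation is a
    distance-weighted average of the known sample elevations."""
    z_est = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                  # query coincides with a sample
            z_est[i] = z_known[d.argmin()]
            continue
        w = 1.0 / d**power                 # closer points weigh more
        z_est[i] = np.sum(w * z_known) / np.sum(w)
    return z_est

# Hypothetical sample points (x, y) with elevations in metres
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
elev = np.array([5.2, 6.1, 4.8, 5.9])
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
print(idw_interpolate(pts, elev, grid))
```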
For businesses that provide delivery services, the punctuality of the delivery process is very important. Besides increasing customer trust, efficient route management and selection are required to reduce vehicle fuel costs and speed up delivery. Some small and medium businesses still manage delivery routes with conventional methods: decisions about delivery schedules and routes follow no specific method for expediting deliveries. This process is inefficient, time-consuming, costly, and prone to errors. Therefore, the Dijkstra algorithm was used to improve the delivery management process. A delivery management system was developed to help managers and drivers …
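The abstract is truncated above. As a minimal sketch of the routing core it names, the following implements Dijkstra's shortest-path algorithm on a small hypothetical road network; node names and edge weights are illustrative only.

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra: shortest distance from `source` to every
    node in a weighted graph given as adjacency lists."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    pq = [(0.0, source)]                  # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                   # stale queue entry, skip
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical road network: node -> [(neighbour, travel cost), ...]
roads = {
    "depot": [("A", 4.0), ("B", 1.0)],
    "A": [("C", 2.0)],
    "B": [("A", 2.0), ("C", 6.0)],
    "C": [],
}
print(dijkstra(roads, "depot"))  # {'depot': 0.0, 'A': 3.0, 'B': 1.0, 'C': 5.0}
```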
Dust is a frequent contributor to health risks and to changes in the climate, making it one of the most dangerous issues facing people today. Desertification, drought, agricultural practices, and sand and dust storms from neighboring regions all bring on this issue. Deep learning (DL) regression based on long short-term memory (LSTM) was proposed as a solution to increase the accuracy of dust forecasting and monitoring. The proposed system has two parts for detecting and monitoring dust: in the first step, LSTM and dense layers are used to build the dust-detection system, while in the second step, the proposed Wireless Sensor Network (WSN) and Internet of Things (IoT) model is used for forecasting and monitoring. The experimental DL system …
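A minimal sketch of an LSTM-plus-dense regression model of the kind described, written with Keras. The window length, layer sizes, and synthetic data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from tensorflow import keras

# Hypothetical setup: predict the next dust reading from a sliding
# window of the previous 24 hourly sensor measurements.
WINDOW, FEATURES = 24, 1

model = keras.Sequential([
    keras.Input(shape=(WINDOW, FEATURES)),
    keras.layers.LSTM(64),                # temporal feature extractor
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),                # regression output
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for real WSN/IoT measurements
x = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```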
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation, and mel-frequency cepstral coefficient (MFCC) features are extracted. The extracted features are then fed to a random forest classifier. In …
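A minimal sketch of the speech branch: extracting the named features (zero-crossing rate, mean, standard deviation, MFCCs) with librosa and feeding them to a scikit-learn random forest. The synthetic clips and labels are placeholders; real work would use a labelled emotion corpus.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(y, sr):
    """The features named in the abstract: zero-crossing rate, mean,
    standard deviation, and MFCCs of one audio clip."""
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[zcr, y.mean(), y.std()], mfcc])

# Synthetic clips standing in for a real labelled emotion corpus
sr = 16000
clips = [np.random.randn(sr).astype(np.float32) for _ in range(4)]
labels = ["happy", "sad", "happy", "sad"]   # hypothetical labels

X = np.stack([speech_features(y, sr) for y in clips])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print(clf.predict(X))
```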
Skull image separation is one of the initial procedures used to detect brain abnormalities. In an MRI image of the brain, this process involves distinguishing the tissue that makes up the brain from the tissue that does not. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping of brain magnetic resonance volumes has therefore become increasingly popular, owing to the requirement for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately for neuroimaging diagnostic systems, since …
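For orientation only, here is a deliberately simplistic skull-stripping sketch for a single slice using scikit-image: Otsu thresholding, largest-component selection, and morphological closing. This is a generic baseline under assumed steps, not the method of the paper, and the structuring-element size is arbitrary.

```python
import numpy as np
from skimage import filters, morphology, measure

def strip_skull(slice_2d):
    """Toy skull stripping for one MRI slice: Otsu threshold, keep the
    largest connected component, close holes to form a brain mask."""
    mask = slice_2d > filters.threshold_otsu(slice_2d)
    labels = measure.label(mask)
    if labels.max() == 0:
        return np.zeros_like(slice_2d)
    counts = np.bincount(labels.ravel())[1:]   # ignore background label 0
    largest = labels == (np.argmax(counts) + 1)
    brain = morphology.binary_closing(largest, morphology.disk(5))
    return slice_2d * brain

# Dummy slice standing in for a real MRI image
demo = np.random.rand(128, 128)
print(strip_skull(demo).shape)
```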
In cyber security, the most crucial subject is user authentication. Robust text-based password methods may offer a certain level of protection, but strong passwords are hard to remember, so people who use them frequently write them down on paper or store them in a file on the computer. Numerous computer systems, networks, and Internet-based environments have experimented with graphical authentication techniques for user authentication in recent years. The two main characteristics of all graphical passwords are their security and usability. Regretfully, none of these methods could adequately address both of these factors concurrently. The ISO usability standards and associated characteristics for graphical …
Cost is the essence of any production process, as it is one of the requirements for the continuity of activities, for increasing the profitability of the economic unit, and for supporting its competitive position in the market. There should therefore be overall control to reduce cost without compromising product quality. To achieve this, management needs detailed, credible, and reliable cost information that can be measured, collected, and understood, so as to analyze the causes of deviations and the obstacles management faces, and to search for the factors that trigger the emergence of these deviations and obstacles.
This research describes a new model inspired by MobileNetV2 that was trained on a very diverse dataset. The goal is to enable fire detection in open areas via deep learning, replacing physical sensor-based fire detectors, reducing false fire alarms, and thereby minimizing losses in open areas. A diverse fire dataset was created that combines images and videos from several sources; in addition, a self-made dataset was collected from the farms of the holy shrine of Al-Hussainiya in the city of Karbala. The model was then trained on the collected dataset. The test accuracy on the fire dataset trained with the new model reached 98.87%.
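Since the paper's architecture is described only as MobileNetV2-inspired, the sketch below shows a standard transfer-learning baseline as an assumption: a frozen MobileNetV2 backbone with a binary fire / no-fire head in Keras.

```python
from tensorflow import keras

# Hypothetical baseline: frozen MobileNetV2 features + binary head.
# The paper's model is only "inspired by" MobileNetV2, so treat this
# as an illustrative starting point, not the published architecture.
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                     # freeze pretrained features

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1, activation="sigmoid"),   # P(fire)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```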
The current research targeted the relationship between guilt and self-consciousness. The research population consisted of students from the Open Educational College, from whom students in the Department of Counseling and the Psychological Science Department were selected. The researcher used a guilt scale prepared by (Ansari, 2003) and a self-awareness measure prepared by (Shammari, 2000), and extracted their psychometric characteristics: alternatives were derived after presentation to a group of experts and specialists in the fields of psychological counseling, psychology, and educational and psychological sciences, and validity and reliability were established by Cronbach's alpha and test-retest, reaching a reliability coefficient of (0.85) for guilt and for awareness …
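For reference, the reliability statistic named above, Cronbach's alpha, is computed as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)$$

where $k$ is the number of scale items, $\sigma_{Y_i}^{2}$ is the variance of item $i$, and $\sigma_{X}^{2}$ is the variance of the total scores.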
In this work we present a technique to extract heart contours from noisy echocardiographic images. Our technique is based on improving the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiographic images. After applying these techniques, we obtain legible detection of heart boundaries and valve movement with traditional edge-detection methods.
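A minimal sketch of such a pipeline in OpenCV, assuming a grayscale echocardiograph frame; the filter sizes, CLAHE clip limit, and Canny thresholds are illustrative choices, not the paper's values, and the input file name is a placeholder.

```python
import cv2

def extract_contours(path):
    """Sketch of the described pipeline: denoise, morphological
    smoothing, contrast adjustment, then classical edge detection."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)                          # speckle filtering
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # morphology
    img = cv2.createCLAHE(clipLimit=2.0).apply(img)       # contrast adjustment
    edges = cv2.Canny(img, 50, 150)                       # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

contours = extract_contours("echo_frame.png")   # hypothetical file name
print(len(contours))
```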