The Enhanced Thematic Mapper Plus (ETM+) carried onboard the Landsat-7 satellite was launched on 15 April 1999. After four years, imagery collected by this sensor was severely affected by the failure of the system's Scan Line Corrector (SLC), a radiometric error. The median filter is one of the basic building blocks in many image-processing applications. Digital images are often corrupted by impulse noise arising from sensor errors, errors introduced during analog-to-digital conversion, and errors generated in communication channels. Such noise inevitably changes the intensity of some pixels while leaving others unchanged; removing it improves the quality of the image under study. In this paper, Landsat-7 data is corrected for line-dropout radiometric errors using the median filter method. We study the median filter and present a method based on an improved median filtering algorithm [2]. A 3 × 3 median filter is applied to a Landsat-7 image, and the damaged pixels are restored using the ERDAS IMAGINE program.
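As an illustrative sketch (not the paper's ERDAS IMAGINE workflow), the core 3 × 3 median filter can be written in a few lines: each interior pixel is replaced by the median of its 3 × 3 neighbourhood, which discards isolated impulse (dropout) values.

```python
# Toy 3x3 median filter on a 2D list of grayscale values; border pixels
# are left unchanged for simplicity.
def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out

# A dropped-out (zero) pixel surrounded by values near 100 is restored.
noisy = [[100, 101,  99],
         [102,   0,  98],
         [ 99, 100, 101]]
clean = median_filter_3x3(noisy)
print(clean[1][1])  # -> 100, the local median replaces the impulse pixel
```

A real SLC-off correction would operate on full scenes and handle borders and gap masks, but the replacement rule is exactly this local median.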
The world keeps pace with evolution in all fields as a result of scientists' pursuit of continuous scientific and technological development. This evolution has included the sports field, which has received considerable attention across all disciplines; it is reflected today in the records and advanced achievements seen in sporting events and activities. Development in the field of sports has been driven by scientific research (Hussein and Jawad, 2022), and interest in the training process has become one of the most important pillars of improving achievement (Neamah and Altay, 2020). The shooting sport has also witnessed remarkable development due to the diversity and advancement of its …
Compressing speech reduces data-storage requirements, thereby reducing the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation methods introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech-compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression. DWT and MCT performances in terms of comp…
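For intuition, here is a hedged sketch of a single-level 1D Haar DWT (the simplest wavelet, not the paper's GHM/MCT filters): it splits a signal into low-pass approximation and high-pass detail coefficients, and wavelet speech compression works by quantizing or discarding the small detail coefficients.

```python
# One-level Haar DWT and its inverse; signal length must be even.
import math

def haar_dwt(signal):
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt(x)
# Small detail coefficients could be zeroed for compression; with nothing
# discarded, reconstruction is exact up to floating-point error.
rec = haar_idwt(a, d)
print(rec)
```

Orthogonality is what makes this invertible; the symmetric multiwavelet filters discussed in the abstract add linear phase on top of that.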
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power from sustainable solar energy, especially in Iraq, which has endured repeated crises and suffers a severe shortage of electric power because of the wars and calamities it has experienced. The impact of that period is still evident in all aspects of Iraqis' daily life: the remnants of wars, siege, terrorism, the misguided policies of past and later governments, and regional interventions and their consequences, such as the destruction of electric power stations. Meanwhile, population growth must be matched by an increase in electric power stations, …
Eye detection is used in many applications, such as pattern recognition, biometrics, surveillance systems, and many others. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, an observed geometric shape is perceptually "meaningful" if its number of repetitions is very small in an image with a random distribution. To achieve this goal, the Gestalt principle states that humans see things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through genera…
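The Helmholtz idea above can be made concrete with a toy formulation (ours, not the paper's detector): count how often a small pattern occurs in a binary image and compare that to the count a purely random image would be expected to produce.

```python
# Count occurrences of a 2x2 binary pattern and compare with the
# expectation under a random (i.i.d. 50/50) image model.
import itertools

def count_pattern(img, pat):
    h, w = len(img), len(img[0])
    n = 0
    for y, x in itertools.product(range(h - 1), range(w - 1)):
        window = (img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
        if window == pat:
            n += 1
    return n

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
solid = (1, 1, 1, 1)                 # a solid 2x2 block
observed = count_pattern(img, solid)
expected_random = 9 * (0.5 ** 4)     # 9 windows, each matches w.p. 1/16
print(observed, expected_random)
```

In the Helmholtz framework, a grouping is declared meaningful when its observed count is very unlikely under the random model; full detectors formalize this with a number-of-false-alarms threshold rather than a raw count.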
Neural cryptography addresses the problem of key exchange between two neural networks using the mutual-learning concept. The two networks exchange their outputs (in bits), and the key shared between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
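A hedged sketch of the standard construction behind this scheme, the Tree Parity Machine (TPM) with Hebbian mutual learning, is below; the sizes K, N, L are illustrative choices, not values from the abstract.

```python
# Two TPMs exchange only their output bits tau; on agreement, each updates
# the hidden units that voted with tau, clipping weights to [-L, L].
import random

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound

def tpm_output(weights, inputs):
    """sigma_k = sign(w_k . x_k); tau = product of the sigmas."""
    sigmas = []
    for k in range(K):
        s = sum(w * x for w, x in zip(weights[k], inputs[k]))
        sigmas.append(1 if s >= 0 else -1)
    tau = 1
    for s in sigmas:
        tau *= s
    return tau, sigmas

def hebbian_update(weights, inputs, tau, sigmas):
    for k in range(K):
        if sigmas[k] == tau:  # only units that agree with the output move
            for i in range(N):
                w = weights[k][i] + inputs[k][i] * tau
                weights[k][i] = max(-L, min(L, w))

random.seed(0)
wA = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
wB = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
for _ in range(2000):
    x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    tA, sA = tpm_output(wA, x)
    tB, sB = tpm_output(wB, x)
    if tA == tB:  # both parties update only when their public bits agree
        hebbian_update(wA, x, tA, sA)
        hebbian_update(wB, x, tB, sB)
print(wA == wB)  # typically True once the networks have synchronized
```

The final weight matrices then serve as the shared key; the attack surface mentioned in the abstract is precisely that a passive eavesdropper running a third TPM can attempt the same synchronization.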
The penalized least-squares method is a popular approach to high-dimensional data, where the number of explanatory variables is larger than the sample size. Penalized least squares offers high prediction accuracy and performs estimation and variable selection at once. It yields a sparse model, that is, a model with few variables, which can therefore be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least-squares method, and thereby a robust penalized estimator and …
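To illustrate the sparsity property described above (our own toy example, not the paper's estimator), here is a lasso-type penalized least squares solved by coordinate descent: the L1 penalty drives coefficients of irrelevant variables to exactly zero.

```python
# Coordinate-descent lasso on lists (no external libraries).
def soft_threshold(rho, lam):
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            norm_sq = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm_sq
    return beta

# y depends only on the first feature; the second is pure noise.
X = [[1.0, 0.3], [2.0, -0.2], [3.0, 0.1], [4.0, -0.4]]
y = [2.1, 3.9, 6.2, 7.8]
beta = lasso_cd(X, y, lam=1.0)
print(beta)  # second coefficient is set exactly to zero
```

A robust variant would replace the squared residual inside `rho` with a bounded loss such as Huber's, which caps the influence of outlying observations while keeping the same penalty and selection behaviour.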
Water/oil emulsions are considered among the most refractory mixtures to separate because of the interference of the two immiscible liquids, water and oil. This research presents a study of dewatering a water/kerosene emulsion using a hydrocyclone. The effects of feed flow rate (3, 5, 7, 9, and 11 L/min), inlet water concentration of the emulsion (5%, 7.5%, 10%, 12.5%, and 15% by volume), and split ratio (0.1, 0.3, 0.5, 0.7, and 0.9) on the separation efficiency and pressure drop were studied. Dimensional analysis using the Buckingham Pi theorem was applied for the first time to model the hydrocyclone based on the experimental data. The maximum separation efficiency, at a split ratio of 0.1, was 94.3% at 10% co…
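For readers unfamiliar with the Pi-theorem step, a sketch of the kind of dimensionless groups such an analysis typically yields for a hydrocyclone (Reynolds and Euler numbers) is shown below; all numeric values are hypothetical assumptions, not the paper's measurements.

```python
# Two classic dimensionless groups from hydrocyclone dimensional analysis.
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu  (inertial vs viscous forces)."""
    return rho * v * d / mu

def euler(delta_p, rho, v):
    """Eu = delta_p / (rho * v**2)  (pressure drop vs dynamic pressure)."""
    return delta_p / (rho * v ** 2)

rho = 800.0      # kerosene-rich feed density, kg/m^3 (assumed)
mu = 2.0e-3      # feed viscosity, Pa.s (assumed)
d = 0.025        # inlet diameter, m (assumed)
q = 7.0 / 60000  # 7 L/min feed rate converted to m^3/s
area = 3.14159 * (d / 2) ** 2
v = q / area     # mean inlet velocity, m/s
print(reynolds(rho, v, d, mu))          # flow-regime indicator
print(euler(50000.0, rho, v))           # for an assumed 50 kPa drop
```

Correlating measured efficiency and pressure drop against groups like these (plus the split ratio and inlet concentration, which are already dimensionless) is what reduces the five-variable experiment to a compact model.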
Diabetes is an increasingly common chronic disease, affecting millions of people around the world. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine-learning-based prediction techniques for diabetes data analysis can help in early detection of the disease and prediction of its consequences, such as hypo/hyperglycemia. In this paper, we explore a diabetes dataset collected from the medical records of one thousand Iraqi patients. We apply three classifiers: the multilayer perceptron, KNN, and random forest. We conduct two experiments: the first uses all 12 features of the dataset, where the random forest outperforms the others with 98.8% accuracy. The second experiment uses only five att…
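One of the three classifiers, KNN, is simple enough to sketch in full; the toy data below (two made-up features, loosely glucose and BMI) is illustrative only and is not the paper's 1000-patient dataset.

```python
# Minimal k-nearest-neighbours classifier: majority vote among the k
# training points closest to the query.
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    dists = sorted(
        (math.dist(pt, x), label) for pt, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [(90, 22), (95, 24), (100, 23),    # non-diabetic cluster
           (180, 31), (190, 33), (200, 35)]  # diabetic cluster
train_y = ["N", "N", "N", "D", "D", "D"]
print(knn_predict(train_X, train_y, (185, 32)))  # -> D
print(knn_predict(train_X, train_y, (92, 23)))   # -> N
```

In practice features on different scales (glucose vs. BMI) should be normalized before computing distances, since KNN is distance-based; the random forest used in the paper's winning experiment does not need that step.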
The benchmarking approach provides the best possible use when evaluating and improving performance, owing to its consistency with the principles of sound performance evaluation. It extends to completing several functions in less time and at lower cost, thereby increasing the efficiency of the management of institutions, especially media institutions, where the audience of the message has become a sender at the same time. The new media challenge traditional media through what distinguishes them, namely interactivity and mass reach, which led to this transition. However, new media and traditional media continue to coexist and reinforce each other, given the new media's wealth of freedom of opin…