The main aim of image compression is to reduce image size so that images can be transmitted and stored more easily; accordingly, many compression methods have appeared, one of which is the Multilayer Perceptron (MLP), an artificial neural network trained with the Back-Propagation algorithm to compress the image. Since the algorithm depends mainly on the number of neurons in the hidden layer, that factor alone is not enough to reach the desired results, so the quality criteria on which the compression process depends must also be taken into account. In this research, a group of TIFF images of size 256*256 was trained and compressed using the MLP; for each compression run the number of neurons in the hidden layer was changed, and the compression ratio, mean square error (MSE), and peak signal-to-noise ratio (PSNR) were computed to compare the reconstructed results with the original image. The research achieved the desired results: the compression ratio was less than five and the mean square error was small, so a large peak signal-to-noise ratio was recorded.
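As a rough illustration of this setup, the sketch below trains a single-hidden-layer MLP autoencoder by back-propagation on 8x8 image blocks and then computes MSE and PSNR; the block size, learning rate, epoch count, and the use of the input-to-hidden dimension ratio as the compression ratio are assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's code): an MLP with one hidden
# layer, trained by back-propagation to reconstruct 8x8 image blocks; the
# hidden-layer size controls the compression ratio.
import numpy as np

def train_mlp_compressor(image, hidden_neurons=16, block=8, lr=0.01, epochs=200):
    h, w = image.shape
    # Flatten the image into block vectors scaled to [0, 1]
    blocks = np.array([image[i:i+block, j:j+block].ravel()
                       for i in range(0, h, block)
                       for j in range(0, w, block)]) / 255.0
    n_in = block * block
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (n_in, hidden_neurons))    # encoder weights
    W2 = rng.normal(0, 0.1, (hidden_neurons, n_in))    # decoder weights
    for _ in range(epochs):
        hidden = 1 / (1 + np.exp(-blocks @ W1))        # sigmoid hidden layer
        out = hidden @ W2                              # linear reconstruction
        err = out - blocks
        # Back-propagate the reconstruction error
        W2 -= lr * hidden.T @ err / len(blocks)
        dh = (err @ W2.T) * hidden * (1 - hidden)
        W1 -= lr * blocks.T @ dh / len(blocks)
    hidden = 1 / (1 + np.exp(-blocks @ W1))
    recon = np.clip(hidden @ W2, 0, 1) * 255.0
    # Reassemble the reconstructed blocks into an image
    out_img = np.zeros_like(image, dtype=float)
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            out_img[i:i+block, j:j+block] = recon[k].reshape(block, block)
            k += 1
    return out_img, n_in / hidden_neurons              # reconstruction, ratio

def mse_psnr(original, reconstructed):
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return mse, 10 * np.log10(255.0 ** 2 / mse)

# Example usage (illustrative): img is a 256x256 uint8 array loaded elsewhere
# recon, cr = train_mlp_compressor(img, hidden_neurons=16)
# mse, psnr = mse_psnr(img, recon)
```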
In this paper, we introduce generalizations of several definitions, namely: closure convergence to a point, closure directedness toward a set, almost ω-convergence to a set, almost condensation points, sets ωH-closed relative to a space, ω-continuous functions, weakly ω-continuous functions, ω-compact functions, ω-rigid sets, almost ω-closed functions, and ω-perfect functions, together with several results concerning them.
In this research, the focus was on estimating the parameters of the min-Gumbel distribution using the maximum likelihood method and the Bayes method. The genetic algorithm was employed to obtain the estimates for both the maximum likelihood method and the Bayes method. The comparison was made using the mean squared error (MSE), where the best estimator is the one with the smallest mean squared error. It was noted that the best estimator was (BLG_GE).
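For concreteness, the following is a minimal sketch, not the paper's implementation, of a real-coded genetic algorithm that maximizes the min-Gumbel log-likelihood; the population size, mutation scale, generation count, and initialization are illustrative assumptions.

```python
# Minimal sketch: genetic-algorithm maximization of the min-Gumbel
# log-likelihood, f(x) = (1/s) * exp(z - exp(z)) with z = (x - m) / s.
import numpy as np

def min_gumbel_loglik(params, x):
    m, s = params
    if s <= 0:
        return -np.inf
    z = (x - m) / s
    return np.sum(z - np.exp(z)) - len(x) * np.log(s)

def ga_mle(x, pop=60, gens=200, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise candidates around rough moment-based guesses
    P = np.column_stack([rng.normal(x.mean(), x.std(), pop),
                         np.abs(rng.normal(x.std(), x.std() / 2, pop)) + 1e-6])
    for _ in range(gens):
        fit = np.array([min_gumbel_loglik(p, x) for p in P])
        order = np.argsort(fit)[::-1]
        elite = P[order[:pop // 2]]                              # selection
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0, mut, children.shape)  # mutation
        children[:, 1] = np.abs(children[:, 1]) + 1e-6            # keep scale > 0
        P = np.vstack([elite, children])
    fit = np.array([min_gumbel_loglik(p, x) for p in P])
    return P[np.argmax(fit)]                                      # best (mu, sigma)

# Example usage (illustrative): a min-Gumbel sample is the negative of numpy's
# (max-)Gumbel generator, so this draws mu=2, sigma=1.5:
# x = -np.random.default_rng(1).gumbel(-2.0, 1.5, 500)
# mu_hat, sigma_hat = ga_mle(x)
```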
Most human facial emotion recognition systems are assessed solely on accuracy, even though other performance criteria, such as sensitivity, precision, F-measure, and G-mean, are also considered important in the evaluation process. Moreover, the most common problem to be resolved in face emotion recognition systems is the feature extraction stage, which is often handled by traditional manual feature extraction methods. These traditional methods cannot extract features efficiently; in other words, they produce a redundant amount of insignificant features, which degrades classification performance. In this work, a new system to recognize human facial emotions from images is proposed. The HOG (Histograms of Oriented Gradients) ...
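The evaluation criteria named above can all be computed from a confusion matrix; the sketch below shows the common binary-class formulas (with G-mean taken as the usual square root of sensitivity times specificity), purely as an illustration rather than the paper's evaluation code.

```python
# Minimal sketch: evaluation metrics from a binary confusion matrix
# (tp, fp, fn, tn); multi-class systems typically average these per class.
import math

def evaluation_metrics(tp, fp, fn, tn):
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall / true-positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision   = tp / (tp + fp) if tp + fp else 0.0
    f_measure   = (2 * precision * sensitivity / (precision + sensitivity)
                   if precision + sensitivity else 0.0)
    g_mean      = math.sqrt(sensitivity * specificity)
    return dict(accuracy=accuracy, sensitivity=sensitivity, precision=precision,
                f_measure=f_measure, g_mean=g_mean)
```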
The aim of the study was to identify the correlation and impact between ethical leadership behavior and university performance at Sumer University. The descriptive analytical method was used, adopting a questionnaire to collect data. The questionnaire was distributed electronically to 113 teachers at Sumer University, and 105 teachers responded. The results showed that there is a correlation and effect relationship between the research variables. In addition, the surveyed university does not have ethically defined standards for the performance of the work of its staff. Finally, the research presented a set of recommendations aimed at tackling problems in the ethical leadership ...
In this paper, the all-possible-regressions procedure as well as the stepwise regression procedure were applied to select the best regression equation explaining the effect of human capital, represented by different levels of human cadres, on the productivity of the processing industries sector in Iraq, employing time-series data covering a 21-year period. The statistical program SPSS was used to perform the required calculations.
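As an illustration of the selection idea, and not of the paper's SPSS workflow, the sketch below performs forward stepwise selection that at each step adds the predictor yielding the largest gain in adjusted R²; the criterion and stopping rule are assumptions made for the example.

```python
# Minimal sketch: forward stepwise selection by adjusted R^2. X columns stand
# for candidate predictors (e.g. human-capital variables), y for productivity.
import numpy as np

def adjusted_r2(y, X):
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = Xc.shape
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def forward_stepwise(y, X):
    remaining, selected = list(range(X.shape[1])), []
    best = -np.inf
    while remaining:
        scores = [(adjusted_r2(y, X[:, selected + [j]]), j) for j in remaining]
        score, j = max(scores)
        if score <= best:
            break                       # no candidate improves adjusted R^2
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best               # chosen column indices and their fit
```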
In the present work, an image compression method has been modified by combining the Absolute Moment Block Truncation Coding (AMBTC) algorithm with VQ-based image coding. First, the AMBTC algorithm, based on a Weber's law condition, is used to distinguish low-detail and high-detail blocks in the original image. The coder transmits only the mean of a low-detail block (i.e., uniform blocks such as background) over the channel instead of transmitting the two reconstruction mean values and the bit map for that block. A high-detail block, in contrast, is coded by the proposed fast encoding algorithm for vector quantization based on the Triangular Inequality Theorem (TIE), and the coder then transmits the two reconstruction mean values (i.e., H and L) ...
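The sketch below illustrates standard AMBTC coding of a single block (mean, first absolute central moment, bit map, and the two reconstruction levels H and L) together with a simple low/high-detail split; the numeric threshold stands in for the Weber's-law condition mentioned above and is an assumption, not the paper's rule.

```python
# Minimal sketch of AMBTC coding for one block, plus an illustrative
# low-detail test (low-detail blocks keep only their mean).
import numpy as np

def ambtc_block(block):
    x = block.astype(float).ravel()
    m, mean = x.size, x.mean()
    alpha = np.abs(x - mean).mean()           # first absolute central moment
    bitmap = x >= mean
    q = bitmap.sum()
    if q in (0, m):                           # perfectly flat block
        return mean, mean, bitmap.reshape(block.shape)
    H = mean + m * alpha / (2 * q)            # high reconstruction level
    L = mean - m * alpha / (2 * (m - q))      # low reconstruction level
    return H, L, bitmap.reshape(block.shape)

def encode_block(block, detail_threshold=5.0):
    H, L, bitmap = ambtc_block(block)
    if H - L < detail_threshold:              # low detail: send the mean only
        return ("low", block.mean())
    return ("high", H, L, bitmap)             # high detail: send H, L and bit map
```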
Some chaotic maps of the chaotic firefly algorithm were selected to perform variable selection for data on blood and vascular diseases obtained from Nasiriyah General Hospital. The data were tested and found to follow the Gamma distribution, and it was concluded that the Chebyshev map method is more efficient than the Sinusoidal map method according to the mean square error criterion.
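For reference, the two chaotic maps named above are often written as follows when they replace uniform random draws inside a chaotic firefly algorithm; these are common textbook forms, and the paper's exact parameterization may differ.

```python
# Minimal sketch of the Chebyshev and Sinusoidal chaotic maps in commonly
# used forms; initial value and parameters are illustrative assumptions.
import numpy as np

def chebyshev_map(x0=0.7, n=100, order=4):
    # x_{k+1} = cos(order * arccos(x_k)); values stay in [-1, 1]
    xs = [x0]
    for _ in range(n - 1):
        xs.append(np.cos(order * np.arccos(xs[-1])))
    return np.array(xs)

def sinusoidal_map(x0=0.7, n=100, a=2.3):
    # x_{k+1} = a * x_k^2 * sin(pi * x_k); values stay in (0, 1) for this a
    xs = [x0]
    for _ in range(n - 1):
        xs.append(a * xs[-1] ** 2 * np.sin(np.pi * xs[-1]))
    return np.array(xs)

# The chaotic sequence (rescaled to [0, 1]) would then drive the randomization
# term of the firefly movement step instead of a uniform random generator.
```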
In this paper, we develop the work of Ghawi on close dual Rickart modules and discuss y-closed dual Rickart modules together with some of their properties. Then, we prove that if … are y-closed simple …-modules and the y-closed … is a dual Rickart module, then either Hom(…) = 0 or … . Also, we study the direct sum of y-closed dual Rickart modules.
Introduction
Tax authorities in various countries of the world use many methods to collect taxes from taxpayers, regardless of the categories and classes of those taxpayers. In Iraq, many methods have been adopted for tax collection over successive periods of time, and the self-assessment method, one of those methods, found scope for application during a certain period, where it was applied to specific economic units. Despite the disadvantages that may accompany the application of ...