In this research, a new technique is suggested to reduce the long encoding time of fractal image compression by using modified moment features on domain blocks. The modified moment features are used to accelerate the matching step of the Iterated Function System (IFS). The main disadvantage of the fractal image compression (FIC) method is the over-long encoding time needed to check all domain blocks and choose the one with the least error as the best match for each range block. In this paper, we develop a method that reduces the encoding time of FIC by shrinking the domain pool based on the moment features of the domain blocks, followed by a comparison with a threshold (the threshold, selected based on experience, is 0.0001). The experiment was conducted on three images of size 512×512 pixels, with a resolution of 8 bits/pixel, and different block sizes (4×4, 8×8, and 16×16 pixels). The encoding time (ET) values achieved by the proposed method were 41.53, 39.06, and 38.16 sec for the boat, butterfly, and house images, respectively, with block size 4×4 pixels. These values were compared with those obtained by the traditional algorithm for the same images with the same block size, which were 1073.85, 1102.66, and 1084.92 sec, respectively. The results imply that the proposed algorithm can remarkably reduce the ET of the images in comparison with the traditional algorithm.
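The domain-pool reduction step can be sketched as follows. The moment feature used here is a plain normalized block mean, an illustrative stand-in for the paper's modified moment features; all function names are hypothetical:

```python
import numpy as np

def block_moment(block):
    """Simple intensity moment of a block (the mean, as an illustrative
    stand-in for the paper's modified moment features)."""
    return float(np.mean(block))

def reduce_domain_pool(range_block, domain_blocks, threshold=1e-4):
    """Keep only the indices of domain blocks whose normalized moment
    distance to the range block falls below the threshold, shrinking
    the search space before the expensive matching step."""
    r_m = block_moment(range_block) / 255.0
    kept = []
    for idx, d in enumerate(domain_blocks):
        d_m = block_moment(d) / 255.0
        if abs(r_m - d_m) < threshold:
            kept.append(idx)
    return kept
```

Only the surviving indices would then be passed to the full IFS error comparison, which is where the reported speed-up comes from.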
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power from sustainable solar energy, especially in Iraq, given the crises it has gone through and the severe shortage of electric power it suffers because of wars and calamities. The impact of that period is still evident in all aspects of the daily life of Iraqis because of the remnants of wars, siege, terrorism, misguided policies before and after, and regional interventions and their consequences, such as the destruction of electric power stations, together with population growth, which must be matched by an increase in electric power stations,
The penalized least squares method is a popular method for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. Its strengths are high prediction accuracy and performing estimation and variable selection at once. It yields a sparse model, that is, a model with few variables, so it can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain the robust penalized least squares method and a robust penalized estimator.
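A minimal sketch of one such robust penalized estimator, combining a Huber loss with an L1 penalty and fitted by proximal gradient descent; the particular loss, penalty, and step sizes are illustrative choices, not necessarily the method proposed here:

```python
import numpy as np

def huber_grad(r, delta=1.345):
    # Gradient of the Huber loss w.r.t. residuals: quadratic near zero,
    # linear in the tails, so outliers have bounded influence.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty; produces exact zeros (sparsity).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_lasso(X, y, lam=0.1, step=0.01, n_iter=5000):
    """Proximal-gradient sketch of robust penalized least squares:
    Huber loss for robustness plus an L1 penalty for sparsity."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ beta - y
        grad = X.T @ huber_grad(r) / len(y)
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

Because the Huber gradient is capped, a handful of gross outliers shift the estimate only slightly, while the soft-thresholding step still zeroes out irrelevant coefficients.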
Recent developments in technology and the area of digital communications have rendered digital images increasingly susceptible to tampering and alteration by unauthorized persons. This may appear to be acceptable, especially when an image editing process is needed to delete or add a particular scene that improves the quality of the image. But what about images used in authorized governmental transactions? The consequences can be severe; any altered document is considered forged under the law and may cause confusion. Also, any document that cannot be verified as authentic is regarded as a fake and cannot be used, inflicting harm on people. The suggested work intends to reduce fraud in electronic documents.
This article deals with estimation of system reliability for one-component, two-component, and s-out-of-k stress-strength system models with non-identical component strengths subjected to a common stress, using the Exponentiated Exponential distribution with a common scale parameter. Based on simulation, comparison studies are made between the ML, PC, and LS estimators of these system reliabilities when the scale parameter is known.
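For the one-component model the reliability is R = P(strength > stress). A simulation sketch, assuming illustrative shape parameters; with a common scale parameter the closed form R = α₁/(α₁+α₂) makes the Monte Carlo estimate easy to check:

```python
import numpy as np

def rexp_exp(alpha, lam, size, rng):
    """Draw from the Exponentiated Exponential distribution,
    F(x) = (1 - exp(-lam*x))**alpha, by inverse-CDF sampling."""
    u = rng.random(size)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

def reliability_mc(alpha_strength, alpha_stress, lam, n=200_000, seed=0):
    """Monte Carlo estimate of R = P(X > Y) for the one-component
    stress-strength model with a common scale parameter lam."""
    rng = np.random.default_rng(seed)
    x = rexp_exp(alpha_strength, lam, n, rng)  # strength draws
    y = rexp_exp(alpha_stress, lam, n, rng)    # common-stress draws
    return float(np.mean(x > y))
```

With α₁ = 2 and α₂ = 1 the estimate should sit near 2/3 regardless of the common scale, which is a quick sanity check on any simulation study of these estimators.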
Segmented regression consists of several sections separated by different change points, showing the heterogeneity arising from the process of separating the segments within the research sample. This research is concerned with estimating the location of the change point between segments and estimating the model parameters, proposing a robust estimation method, and comparing it with some other methods used in segmented regression. One of the traditional methods (the Muggeo method) has been used to find the maximum likelihood estimator in an iterative approach for the model and the change point as well. Moreover, a robust estimation method (the IRW method) has been used, which depends on the robust M-estimator technique.
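A toy illustration of change-point estimation by profiling the residual sum of squares over a grid of candidate change points; this grid search is a simple stand-in for Muggeo's iterative linearization, and all names are hypothetical:

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Profile least squares for a two-segment broken-line model:
    for each candidate change point psi, fit y ~ 1 + x + max(x - psi, 0)
    by ordinary least squares and keep the psi with the smallest RSS."""
    best = None
    for psi in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - psi, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, psi, coef)
    return best[1], best[2]  # estimated change point and coefficients
```

The third coefficient is the slope difference between the two segments, so the fitted model is piecewise linear and continuous at the estimated change point.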
The estimation of rock petrophysical parameters is essential for characterizing any reservoir. This research deals with the evaluation of effective porosity (Pe), shale volume (Vsh), and water saturation (Sw) of reservoirs in the Kumait and Dujalia fields, analyzed from well log and seismic data. The absolute acoustic impedance (AI) and relative acoustic impedance (RAI) were derived from a model based on the inversion of 3-D post-stack seismic data. The Nahr Umr Formation's sand reservoirs are identified by the RAI section of the study area. The Nahr Umr sand-2 unit in the Kumait field is the main reservoir; its delineation depends on the available well logs and AI section information.
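The standard log-based relations behind these quantities can be sketched as follows. The linear gamma-ray index for Vsh and Archie's equation for Sw are textbook formulas; the parameter values in the test are illustrative, not taken from the Kumait or Dujalia data:

```python
def shale_volume(gr, gr_clean, gr_shale):
    """Linear gamma-ray index, a common first-pass estimate of Vsh."""
    return (gr - gr_clean) / (gr_shale - gr_clean)

def water_saturation(phi, rw, rt, a=1.0, m=2.0, n=2.0):
    """Archie's equation for Sw in clean sand:
    Sw = ((a * Rw) / (phi**m * Rt))**(1/n)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

def acoustic_impedance(density, velocity):
    """AI = bulk density * P-wave velocity, the quantity the
    post-stack inversion recovers."""
    return density * velocity
```

In practice the Archie exponents a, m, and n are calibrated per field, and the AI values here would come from the inversion rather than direct log multiplication.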
Objective: This study aimed to assess a newly suggested technique for Physical Growth Curves (PGC) charts in children under two years old, using a non-probability sample.
Methodology: A non-probability sample of 420 children under two years was selected from 12 Primary Health Care Centers in Diyala governorate during the period from 15th Nov. 2010 to 13th Mar. 2011, combining different properties together in one growth curve chart, including at least weight, height, and head circumference.
Results: The results showed that different properties can be combined together in one growth curve chart, including at least weight, height, and head circumference, and that this helps to overcome the problem of the norm.
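Growth-reference charts of this kind are commonly expressed with the LMS method (Cole's transformation), which converts a raw measurement into an age-standardized z-score; a minimal sketch with illustrative L, M, S values, not the study's data:

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS z-score: how far a measurement x lies from the
    age-specific median M, given skewness L and variation S.
    z = ((x/M)**L - 1) / (L*S) for L != 0, else z = ln(x/M) / S."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

A measurement equal to the median M gives z = 0 by construction, which is a convenient check when combining weight, height, and head circumference on one chart.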
The Elliptic Curve Cryptography (ECC) algorithm meets the requirements of multimedia encryption, since the encipher operation of the ECC algorithm is applied to points only, which offers significant computational advantages. The encoding/decoding operations for converting a text message into points on the curve and vice versa are not always a simple process. In this paper, a new mapping method has been investigated for converting a text message into a point on the curve, or a point into a text message, in an efficient and secure manner; it depends on the repeated values in the coordinates to establish a lookup table for the encoding/decoding operations.
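The paper's lookup-table construction is not reproduced here; for comparison, a standard Koblitz-style embedding on a toy curve shows the flavour of mapping message units to curve points. The curve parameters and expansion factor K are illustrative only:

```python
# Toy curve y^2 = x^3 + A*x + B over GF(P); parameters are illustrative only.
P, A, B = 10007, 0, 7
K = 16  # expansion factor: message unit m probes x = m*K, ..., m*K + K - 1

def curve_y(x):
    """Return some y with y^2 = x^3 + A*x + B (mod P), or None if the
    right-hand side is not a quadratic residue."""
    rhs = (pow(x, 3, P) + A * x + B) % P
    for y in range(P):
        if (y * y) % P == rhs:
            return y
    return None

def encode_unit(m):
    """Koblitz-style embedding of a small integer (e.g. a byte) as a
    point: try successive x values until one lies on the curve."""
    for j in range(K):
        y = curve_y(m * K + j)
        if y is not None:
            return (m * K + j, y)
    raise ValueError("no embedding found; increase K")

def decode_point(pt):
    """Invert the embedding by discarding the probe offset j."""
    return pt[0] // K
```

Each probe succeeds with probability about one half, so with K = 16 an embedding failure is vanishingly rare; a lookup-table scheme like the paper's trades this probing for precomputed storage.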
The research aims to evaluate the suppliers of Diyala General Electric Industries Company in an environment of uncertainty and fuzziness, where there is no particular system followed by the company, and also to apply the traveling salesman problem to the process of transporting raw materials from suppliers to the company in a fuzzy environment. Therefore, a system based on mathematical and quantitative methods was developed to evaluate the suppliers. A fuzzy inference system (FIS) and fuzzy set theory were used to solve this problem through Matlab, and the traveling salesman problem was also solved in two stages, the first stage being the elimination of the fuzziness of the environment using the rank function method.
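The two-stage idea (defuzzify first, then route) can be sketched as follows; the centroid rank for triangular fuzzy numbers and the nearest-neighbour heuristic are illustrative choices, not necessarily the exact rank function or TSP solver used here:

```python
def rank_triangular(tfn):
    """Rank (defuzzify) a triangular fuzzy number (a, b, c) by its
    centroid, one common rank-function choice."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def nearest_neighbour_tour(dist, start=0):
    """Greedy nearest-neighbour heuristic for the travelling salesman
    problem on a crisp (defuzzified) distance matrix."""
    n = len(dist)
    tour, seen = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: dist[cur][j])
        tour.append(nxt)
        seen.add(nxt)
    return tour
```

Stage one maps every fuzzy transport cost to a crisp number with the rank function; stage two then runs any ordinary TSP method on the resulting crisp matrix.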
The aim of this paper is to design fast neural networks to approximate periodic functions, that is, to design fully connected networks containing links between all nodes in adjacent layers, which can speed up the approximation, reduce approximation failures, and increase the possibility of obtaining the globally optimal approximation. We train the suggested network with the Levenberg-Marquardt training algorithm, then speed up the suggested networks by choosing the activation function (transfer function) with the fastest convergence rate for reasonable network sizes. In all algorithms, the gradient of the performance function (energy function) is used to determine how to adjust the weights.
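As a minimal illustration of approximating a periodic function with a small fully connected network, the sketch below fits sin(x) with one tanh hidden layer; plain full-batch gradient descent stands in for Levenberg-Marquardt here for brevity, and all sizes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
xn = x / np.pi - 1.0          # normalize inputs to [-1, 1]
t = np.sin(x)                 # periodic target to approximate

H = 16                        # hidden tanh (transfer function) units
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(20000):        # gradient descent on mean squared error
    h = np.tanh(xn @ W1 + b1)
    y = h @ W2 + b2
    e = y - t                            # error signal
    gW2 = h.T @ e / len(x); gb2 = e.mean(axis=0)
    dh = (e @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = xn.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(xn @ W1 + b1) @ W2 + b2 - t) ** 2))
```

The gradient of the performance function drives every weight update, exactly as in the quoted description; Levenberg-Marquardt would additionally use approximate second-order (Jacobian) information to converge in far fewer iterations.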