In this research, we compare the traditional information criteria (AIC, SIC, HQ, FPE) with the modified divergence information criterion (MDIC) used to determine the order of the autoregressive (AR) model of the data-generating process. The comparison is carried out by simulation, generating data from several autoregressive models in which the error term follows the normal distribution with different values of its parameters.
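As an illustration of how the classical criteria are computed in such a simulation, the Python sketch below generates data from an AR(2) process with normal errors, fits candidate orders by least squares, and selects the order that minimizes AIC, SIC, HQ and FPE. The simulation design, sample size and coefficients are illustrative, and MDIC is omitted because its exact form follows the paper.

```python
# Illustrative sketch: AR order selection with AIC, SIC, HQ and FPE on simulated data.
import numpy as np

def simulate_ar(coeffs, n, sigma=1.0, burn=200, seed=0):
    """Generate n observations from an AR(p) process with normal errors."""
    rng = np.random.default_rng(seed)
    p = len(coeffs)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        x[t] = np.dot(coeffs, x[t - p:t][::-1]) + rng.normal(0.0, sigma)
    return x[burn:]

def criteria_for_order(x, p):
    """Fit AR(p) by least squares and return (AIC, SIC, HQ, FPE)."""
    T = len(x) - p
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    aic = np.log(sigma2) + 2 * p / T
    sic = np.log(sigma2) + p * np.log(T) / T
    hq  = np.log(sigma2) + 2 * p * np.log(np.log(T)) / T
    fpe = sigma2 * (T + p) / (T - p)
    return aic, sic, hq, fpe

x = simulate_ar([0.6, -0.3], n=300)            # true order p = 2
scores = {p: criteria_for_order(x, p) for p in range(1, 9)}
for name, idx in zip(("AIC", "SIC", "HQ", "FPE"), range(4)):
    best = min(scores, key=lambda p: scores[p][idx])
    print(f"{name}: selected order {best}")
```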
This study aimed to identify business risks using the client strategy analysis approach in order to improve the efficiency and effectiveness of the audit process. A study of business risks and their impact on the efficiency and effectiveness of the audit process was performed to establish a cognitive framework for the main objective of this study, for which the descriptive analytical method was adopted. A survey questionnaire was developed and distributed to the targeted group of audit firms licensed by the Auditors Association in the Gaza Strip (63 offices). One hundred questionnaires were distributed to the study sample, of which a total of 84 were answered and …
Groundwater is an essential resource because of the high quality and continuous availability that characterize it; therefore, the study of groundwater requires more attention. The present study aims to assess and manage the suitability of groundwater quality for various purposes through the Geographic Information System (GIS) and the Water Quality Index (WQI). The study area is located in the city of Baghdad in central Iraq, with an approximate area of 900 km². Data were collected from the relevant official departments for the locations of 97 groundwater wells in the study area for the year 2019, including physicochemical parameters such as pH, EC, TDS, Na, K, Mg, Ca, Cl, …
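The following sketch illustrates a weighted-arithmetic WQI computation of the kind commonly paired with such parameter sets. The well record, parameter weights and permissible limits are illustrative assumptions, not the study's data or weighting scheme.

```python
# Illustrative weighted-arithmetic WQI for a single well (assumed values throughout).
measured  = {"pH": 7.6, "EC": 1450, "TDS": 980, "Na": 160, "K": 9,
             "Mg": 55, "Ca": 120, "Cl": 210}          # example well record (assumed)
standards = {"pH": 8.5, "EC": 1500, "TDS": 1000, "Na": 200, "K": 12,
             "Mg": 50, "Ca": 75, "Cl": 250}           # permissible limits (assumed)

# Unit weight inversely proportional to the standard of each parameter.
k = 1.0 / sum(1.0 / s for s in standards.values())
weights = {p: k / s for p, s in standards.items()}

# Quality rating q_i = 100 * C_i / S_i (pH is treated like the other parameters
# here for simplicity; many WQI formulations rate it relative to the neutral value 7).
q = {p: 100.0 * measured[p] / standards[p] for p in measured}

# WQI = sum(w_i * q_i) / sum(w_i); classification bands depend on the chosen scheme.
wqi = sum(weights[p] * q[p] for p in measured) / sum(weights.values())
print(f"WQI = {wqi:.1f}")
```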
In this paper, we present a proposed enhancement of the image compression process using the RLE algorithm. The proposal aims to decrease the size of the compressed image, whereas the original method was designed primarily for compressing binary images [1] and mostly increases the size of the original image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images, and an application was built in Visual Basic 6.0 to show the image size before and after compression and to compute the compression ratio for both the standard RLE and the enhanced RLE algorithm.
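A minimal sketch of run-length encoding applied to 24-bit RGB pixels, together with the compression-ratio measurement described above, is shown below. It implements plain RLE on pixel triples rather than the paper's specific enhancement, and the file name is illustrative.

```python
# Plain RLE over RGB pixel triples plus compression-ratio measurement (illustrative).
from PIL import Image   # assumes Pillow is available

def rle_encode(pixels):
    """Encode a flat list of (R, G, B) tuples as [count, pixel] runs."""
    runs = []
    for px in pixels:
        if runs and runs[-1][1] == px and runs[-1][0] < 255:
            runs[-1][0] += 1          # extend the current run (count fits one byte)
        else:
            runs.append([1, px])      # start a new run
    return runs

img = Image.open("sample.bmp").convert("RGB")     # illustrative 24-bit test image
pixels = list(img.getdata())
runs = rle_encode(pixels)

original_size   = len(pixels) * 3                 # 3 bytes per pixel
compressed_size = len(runs) * 4                   # 1 count byte + 3 colour bytes
print(f"compression ratio = {original_size / compressed_size:.2f}")
```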
Image segmentation can be defined as the process of cutting or partitioning a digital image into many useful regions, called segments, whose image elements share certain attributes that differ from the pixels constituting the other parts. Two phases of image processing were followed by the researcher in this paper. First, the images were pre-processed before segmentation using statistical confidence intervals, which can be used to estimate unknown observations, as suggested by Acho & Buenestado in 2018. Then, the second phase applied the image segmentation process, using "Bernsen's Thresholding Technique" on the output of the first phase. The researcher drew a conclusion that in the case of utilizing …
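A minimal sketch of Bernsen's local thresholding, the segmentation step named above, is given below; the confidence-interval pre-processing of Acho & Buenestado (2018) is not reproduced, and the window size and contrast limit are illustrative choices.

```python
# Bernsen local thresholding: threshold each pixel at the local (max + min) / 2.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_threshold(gray, window=15, contrast_limit=15):
    """Binarize a grayscale uint8 image with Bernsen's local mid-gray threshold."""
    local_max = maximum_filter(gray, size=window).astype(np.float64)
    local_min = minimum_filter(gray, size=window).astype(np.float64)
    midgray   = (local_max + local_min) / 2.0
    contrast  = local_max - local_min

    binary = gray >= midgray
    # Low-contrast neighbourhoods are treated as homogeneous and classified by
    # whether their mid-gray level lies above the global mid value.
    low = contrast < contrast_limit
    binary[low] = midgray[low] >= 128
    return binary.astype(np.uint8) * 255
```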
In this research, an analysis of the behaviour of the standard Hueckel edge detection algorithm is presented using three-dimensional representations of the edge goodness criterion, after applying the algorithm to a real, highly textured satellite image; the edge goodness criterion is analysed statistically. The Hueckel edge detection algorithm showed a forward exponential relationship between the execution time and the disk radius used. The restrictions mentioned in Hueckel's papers are adopted in this research. A discussion of the resulting edge shape and malformation is presented, since this is the first practical study applying the Hueckel edge detection algorithm to a real, highly textured image containing ramp edges (a satellite image).
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a long encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process based on a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality of the results acceptable.
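The sketch below illustrates the general idea of reducing the number of range blocks by grouping neighbouring blocks whose statistical measures (mean and standard deviation) are close; the grouping rule and tolerances are illustrative assumptions rather than the paper's exact merge criterion.

```python
# Illustrative merge of raster-adjacent range blocks with similar statistics.
import numpy as np

def range_blocks(img, size=8):
    """Split a grayscale image into non-overlapping size x size range blocks."""
    h, w = img.shape
    return [((r, c), img[r:r + size, c:c + size])
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def merge_similar(blocks, mean_tol=2.0, std_tol=2.0):
    """Group consecutive blocks whose mean and std differ by less than the tolerances."""
    groups, current = [], [blocks[0]]
    for blk in blocks[1:]:
        m0, s0 = current[-1][1].mean(), current[-1][1].std()
        m1, s1 = blk[1].mean(), blk[1].std()
        if abs(m0 - m1) < mean_tol and abs(s0 - s1) < std_tol:
            current.append(blk)        # same group -> one shared domain search
        else:
            groups.append(current)
            current = [blk]
    groups.append(current)
    return groups

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.float64)
groups = merge_similar(range_blocks(img))
print(f"{sum(len(g) for g in groups)} blocks reduced to {len(groups)} groups")
```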
In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient trade-off among three conflicting goals: maximizing the amount of hidden message, minimizing the distortion between the cover image and the stego-image, and maximizing the robustness of the embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency regions of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude Adder/Subtractor modules. The conducted tests indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the stego-image.
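The sketch below shows a generic quantization-style embedding of one bit per 8x8 block in a mid-frequency DCT coefficient by uniform quantization of its magnitude; the paper's Adder/Subtractor modulation details may differ, and the coefficient position and step size are illustrative.

```python
# Generic magnitude-quantization embedding in one mid-frequency DCT coefficient.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bit(block, bit, pos=(4, 3), step=16.0):
    """Quantize the magnitude of one coefficient so its quantizer index parity = bit."""
    C = dct2(block.astype(np.float64))
    mag, sign = abs(C[pos]), (np.sign(C[pos]) or 1.0)
    q = np.floor(mag / step)
    if int(q) % 2 != bit:                # adjust by one step to match the bit
        q += 1
    C[pos] = sign * (q * step + step / 2)   # centre of the chosen quantization bin
    return np.clip(idct2(C), 0, 255)

def extract_bit(block, pos=(4, 3), step=16.0):
    C = dct2(block.astype(np.float64))
    return int(np.floor(abs(C[pos]) / step)) % 2
```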
In this paper, a method for hiding cipher text in an image file is introduced. The proposed method hides the cipher-text message in the frequency domain of the image. The method consists of two phases: an embedding phase and an extraction phase. In the embedding phase, the image is transformed from the spatial domain to the frequency domain using the discrete wavelet decomposition technique (Haar). The text message is encrypted using the RSA algorithm; then the Least Significant Bit (LSB) algorithm is used to hide the secret message in the high-frequency coefficients. The proposed method was tested on different images and showed success in hiding information according to the Peak Signal-to-Noise Ratio (PSNR) measure of the original image …
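A minimal sketch of the described pipeline is given below, assuming PyWavelets for the Haar transform: an already RSA-encrypted bit stream is embedded in the LSBs of the high-frequency sub-band coefficients, the image is reconstructed, and PSNR is computed. The RSA step and the exact coefficient ordering are left to the paper; all names and sizes here are illustrative.

```python
# Haar DWT + LSB embedding in the high-frequency sub-band, with a PSNR check.
import numpy as np
import pywt   # PyWavelets (assumed library choice)

def embed(cover, cipher_bits):
    """Hide cipher bits in the LSBs of the rounded diagonal-detail coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(np.float64), "haar")
    flat = np.rint(cD).astype(np.int64).ravel()          # high-frequency band
    for i, bit in enumerate(cipher_bits[:flat.size]):
        flat[i] = (flat[i] & ~1) | bit                    # replace the LSB
    cD = flat.reshape(cD.shape).astype(np.float64)
    stego = pywt.idwt2((cA, (cH, cV, cD)), "haar")
    return np.clip(np.rint(stego), 0, 255).astype(np.uint8)

def psnr(original, stego):
    mse = np.mean((original.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)
bits  = np.random.default_rng(1).integers(0, 2, 1000).tolist()   # stands in for RSA output
stego = embed(cover, bits)
print(f"PSNR = {psnr(cover, stego):.2f} dB")
```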