Orthogonal polynomials and their moments play a significant role in image processing and computer vision. One such family is the discrete Hahn polynomials (DHaPs), which are used for compression and feature extraction. However, at high moment orders they suffer from numerical instability. This paper proposes a fast approach for computing high-order DHaPs. The work exploits multithreading for the calculation of the Hahn polynomial coefficients: to take advantage of the available processing capabilities, independent calculations are divided among threads, and a distribution method is provided to balance the processing burden across them. The proposed methods are tested for various DHaP parameter values, polynomial sizes, and numbers of threads. Compared with the unthreaded case, the results demonstrate a processing-time improvement that grows with the polynomial size, reaching a maximum speed-up of 5.8 for a polynomial size and order of 8000 × 8000 (matrix size). Furthermore, simply raising the number of threads does not consistently enhance performance; beyond some point the improvement falls below the maximum. The number of threads that achieves the highest improvement depends on the size, ranging from 8 to 16 threads for a 1000 × 1000 matrix and from 32 to 160 threads for the 8000 × 8000 case.
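The abstract's thread-distribution idea can be illustrated with a minimal sketch. This is not the paper's actual implementation: the function `compute_row` is a hypothetical stand-in for the per-order recurrence, and the round-robin assignment is one simple way to balance load when the cost of each order varies.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_row(n, size):
    # Hypothetical stand-in for evaluating the order-n polynomial row;
    # the real coefficients come from the Hahn three-term recurrence.
    return [n * x for x in range(size)]

def compute_all(size, num_threads):
    rows = [None] * size

    def worker(start):
        # Interleaved (round-robin) assignment: thread t handles orders
        # t, t+T, t+2T, ..., which spreads cheap and expensive orders
        # across threads for a more balanced processing burden.
        for n in range(start, size, num_threads):
            rows[n] = compute_row(n, size)

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for t in range(num_threads):
            pool.submit(worker, t)
    # The with-block waits for all submitted workers to finish.
    return rows
```

As the abstract notes, the optimal thread count depends on the matrix size; in CPython, true CPU parallelism would additionally require releasing the GIL (e.g. via NumPy or native extensions).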
In recent years, with the rapid development of classification systems for digital content identification, automatic image classification has become one of the most challenging tasks in computer vision. Automatically understanding and analyzing images is far harder for a system than it is for human vision. Some studies have addressed the issue with low-level classification systems, but their output was restricted to basic image features, and such approaches fail to classify images accurately. To achieve the results expected in this field, this study proposes a deep learning approach.
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but suffers from a long encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process based on a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on statistical measures computed between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality acceptable.
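The block-reduction idea above can be sketched as follows. This is a simplified illustration, not the paper's method: it uses block mean and variance as the statistical measures (the abstract does not specify which measures are used), and the threshold `tol` is an assumed parameter.

```python
def block_stats(img, bs):
    # img: 2-D list of pixel values. Split into non-overlapping bs x bs
    # range blocks and record (position, mean, variance) per block.
    h, w = len(img), len(img[0])
    stats = []
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            vals = [img[i + di][j + dj] for di in range(bs) for dj in range(bs)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            stats.append(((i, j), mean, var))
    return stats

def similar_pairs(stats, tol=2.0):
    # Blocks whose mean and variance both differ by less than tol are
    # treated as statistically similar, so one encoding can serve both,
    # shrinking the number of range blocks that must be searched.
    return [(a, b)
            for a in range(len(stats))
            for b in range(a + 1, len(stats))
            if abs(stats[a][1] - stats[b][1]) < tol
            and abs(stats[a][2] - stats[b][2]) < tol]
```

Merging similar blocks before the domain search is what cuts the encoding time, at a small cost in fidelity controlled by `tol`.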
Fractal image compression depends on representing an image using affine transformations. The main concern for researchers in the discipline of fractal image compression (FIC) is to decrease the encoding time needed to compress image data. The basic premise is that each portion of the image is similar to other portions of the same image, and many models have been developed around this process. The presence of fractals was initially noticed and handled using the Iterated Function System (IFS), which is used for encoding images. This paper reviews fractal image compression and its variants along with other techniques. A summarized review of contributions is provided to determine the fulfillment of fractal image …
The research problem arose from the researchers' sense of the importance of Digital Intelligence (DI), as it is a basic requirement to help students engage in the digital world and use technology and digital techniques in a disciplined way, since students' ideas are readily influenced at this stage by modern technology. The research aims to determine the level of DI among university students using Artificial Intelligence (AI) techniques. To verify this, the researchers built a measure of DI. The measure in its final form consisted of 24 items distributed across 8 main skills, and the validity and reliability of the tool were confirmed. It was applied to a sample of 139 male and female students who were chosen …
Sensitive and important data have increased rapidly in recent decades, owing to the tremendous expansion of networking infrastructure and communications. Securing this growing volume of data has become necessary; different cipher techniques and methods are used to satisfy the security goals of integrity, confidentiality, and availability. This paper presents a proposed hybrid text cryptography method that encrypts sensitive data using several encryption algorithms: Caesar, Vigenère, Affine, and multiplicative. This hybrid text cryptography method aims to make the encryption process more secure and effective. The hybrid method depends on a circular queue. Using circular …
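A minimal sketch of chaining the four named classical ciphers through a circular queue is shown below. The abstract does not specify how the circular queue is used, so this ordering, the keys, and the round count are all illustrative assumptions, not the paper's scheme. Input is assumed to be uppercase A-Z.

```python
from collections import deque

A = ord('A')

def caesar(t, k):
    return ''.join(chr((ord(c) - A + k) % 26 + A) for c in t)

def multiplicative(t, k):
    # k must be coprime with 26 for the cipher to be invertible.
    return ''.join(chr(((ord(c) - A) * k) % 26 + A) for c in t)

def affine(t, a, b):
    return ''.join(chr(((ord(c) - A) * a + b) % 26 + A) for c in t)

def vigenere(t, key):
    return ''.join(chr((ord(c) - A + ord(key[i % len(key)]) - A) % 26 + A)
                   for i, c in enumerate(t))

def hybrid_encrypt(text, rounds=4):
    # A circular queue of ciphers: each round dequeues the next cipher,
    # applies it, and re-enqueues it, so the schemes cycle indefinitely.
    # Keys here (3, "KEY", (5, 8), 7) are hypothetical examples.
    q = deque([
        lambda t: caesar(t, 3),
        lambda t: vigenere(t, "KEY"),
        lambda t: affine(t, 5, 8),
        lambda t: multiplicative(t, 7),
    ])
    for _ in range(rounds):
        cipher = q.popleft()
        text = cipher(text)
        q.append(cipher)
    return text
```

Since every stage is an invertible modular map, decryption applies the inverse ciphers in reverse queue order.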
This study aimed to measure marketing efficiency and examine the important factors affecting it, using the TOBIT qualitative-response model for the wheat crop in Salahalddin province. Results revealed that independent factors such as marketing type, crop duration in the field, average marketing cost, distance between farm and marketing center, and average productivity had an impact on wheat marketing efficiency; the impact varied in size and direction according to the parameter values. Marketing efficiency fluctuated among cities and towns in the province, with an average of 76.75% at the province level. The study recommended developing marketing infrastructure, which is essential to increasing efficiency. In addition, it is impo…
Accurate emotion categorization is an important and challenging task in computer vision and image processing. A facial emotion recognition system involves three main stages: pre-processing and face-area allocation, feature extraction, and classification. This study proposes a new system based on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation, the standard "JAFFE" database has been used as test material; it holds face samples for seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances, angles …
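Distance and angle features of the kind the abstract describes can be computed with basic analytic geometry. The landmark names below (`left_eye`, `right_eye`, `mouth`) and the particular feature set are hypothetical examples, not the study's actual feature definitions.

```python
import math

def distance(p, q):
    # Euclidean distance between two 2-D landmark points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(vertex, p, q):
    # Interior angle (degrees) at `vertex` formed by the rays toward
    # p and q, computed from the dot product of the two edge vectors.
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def feature_vector(landmarks):
    # landmarks: dict of named facial points (names are illustrative).
    le = landmarks["left_eye"]
    re = landmarks["right_eye"]
    m = landmarks["mouth"]
    return [distance(le, re), distance(le, m), distance(re, m),
            angle_at(m, le, re)]
```

Such a vector of inter-landmark distances and angles is what would then be fed to the feed-forward neural network classifier.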
The current standard for treating pilonidal sinus (PNS) is surgical intervention with excision of the sinus. Recurrence of PNS can be controlled with good hygiene and regular shaving of the natal cleft; laser treatment is a useful adjunct to prevent recurrence. The carbon dioxide (CO2) laser is the gold standard among soft-tissue surgical lasers owing to its wavelength (10,600 nm), shallow penetration depth (0.03 mm), and narrow collateral thermal zone (150 µm). It effectively seals blood vessels, lymphatics, and nerve endings; moreover, the wound is rendered sterile by the effect of the laser. The aim of this study was to apply and assess the clinical usefulness of the CO2 10,600 nm laser in pilonidal sinus excision and to decrease the chance of recurrence. Design: for 10 patients, between 18 and 39 years …