Optical Mark Recognition (OMR) is the technology of electronically extracting intended data from marked fields, such as square and bubble fields, on printed forms. OMR is particularly useful for applications in which large numbers of hand-filled forms must be processed quickly and with a high degree of accuracy, and the technique is especially popular with schools and universities for reading multiple-choice exam papers. This paper proposes an OMR system based on a Modified Multi-Connect Architecture (MMCA) associative memory that works in two phases: a training phase and a recognition phase. The proposed method is also able to detect questions with more than one selected choice or with no selected choice. Across 800 test samples covering 8 types of grid answer sheets and a total of 58,000 questions, the system achieves an accuracy of 99.96% in recognizing marked answers, making it suitable for real-world applications.
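The excerpt does not show how individual bubbles are classified before the MMCA associative memory is applied, so the following is only a minimal sketch of a typical mark-detection step: binarized bubble regions are scored by their fill ratio, which also exposes the multiple-mark and blank cases the paper reports detecting. The function name, input format, and threshold value are assumptions for illustration.

```python
import numpy as np

def classify_question(cells, fill_threshold=0.45):
    """Classify one question from a list of binarized bubble regions.

    cells          -- list of 2-D numpy arrays (1 = dark pixel, 0 = background),
                      one per answer choice; hypothetical input format.
    fill_threshold -- fraction of dark pixels above which a bubble counts as marked.
    Returns the index of the single marked choice, or "MULTIPLE" / "BLANK"
    when more or fewer than one bubble is filled.
    """
    fill_ratios = [cell.mean() for cell in cells]          # dark-pixel fraction per bubble
    marked = [i for i, r in enumerate(fill_ratios) if r >= fill_threshold]
    if len(marked) == 1:
        return marked[0]
    return "MULTIPLE" if len(marked) > 1 else "BLANK"
```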
The industrial world is moving towards intelligent, computerized systems that offer high speed. Face recognition is a biometric technique that can identify people from their faces. When only a few people need to be identified, it can be considered a fast system; as the number of people grows, however, the system cannot be adopted in a real-time application because its speed degrades along with its accuracy. The accuracy can be improved using pre-processing techniques, but the time delay remains a challenge. A series of experiments was carried out on AT&T (ORL) database images using an Enhanced Face Recognition System (EFRS) that is
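The excerpt mentions that pre-processing improves recognition accuracy but does not say which techniques EFRS uses, so the following is only a hedged sketch of a common pre-processing step for ORL-style face images: resizing to the standard 92×112 resolution and histogram equalization to reduce illumination differences. None of these choices are taken from the paper.

```python
import numpy as np

def preprocess_face(gray, size=(92, 112)):
    """Illustrative pre-processing for a grayscale face image (uint8 array).

    Steps (an assumption, not the EFRS pipeline from the paper):
    1. resize to the ORL image size of 92x112 pixels by nearest-neighbour sampling,
    2. apply histogram equalization to reduce illumination differences.
    """
    h, w = gray.shape
    rows = (np.arange(size[1]) * h // size[1]).clip(0, h - 1)
    cols = (np.arange(size[0]) * w // size[0]).clip(0, w - 1)
    resized = gray[rows[:, None], cols[None, :]]

    # Histogram equalization: map intensities through the normalized CDF.
    hist = np.bincount(resized.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[resized].astype(np.uint8)
```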
Automatic document summarization technology is evolving and may offer a solution to the problem of information overload. Multi-document summarization is an optimization problem that demands optimizing more than one objective function concurrently. The proposed work balances two significant objectives, content coverage and diversity, while generating a summary from a collection of text documents. Despite the large effort that several researchers have put into designing and evaluating text summarization techniques, their formulations lack a model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. The design of gener
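The excerpt names the two objectives but not their formulation, so the following is only a minimal sketch of how a candidate summary could be scored for coverage and diversity, assuming bag-of-words sentence vectors and cosine similarity; the function names, the centroid-based coverage term, and the trade-off weight `alpha` are illustrative assumptions, not the paper's model.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def summary_score(summary_vecs, doc_centroid, alpha=0.5):
    """Score a candidate summary under two hedged objectives.

    coverage  -- mean similarity of the selected sentences to the centroid
                 of the whole document collection (higher = more coverage).
    diversity -- one minus the mean pairwise similarity among the selected
                 sentences (higher = less redundancy).
    alpha     -- assumed trade-off weight between the two objectives.
    """
    coverage = np.mean([cosine(v, doc_centroid) for v in summary_vecs])
    pairs = [cosine(summary_vecs[i], summary_vecs[j])
             for i in range(len(summary_vecs))
             for j in range(i + 1, len(summary_vecs))]
    diversity = 1.0 - (np.mean(pairs) if pairs else 0.0)
    return alpha * coverage + (1 - alpha) * diversity
```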
HM Al-Dabbas, RA Azeez, AE Ali, IRAQI JOURNAL OF COMPUTERS, COMMUNICATIONS, CONTROL AND SYSTEMS ENGINEERING, 2023
Image content verification aims to confirm the validity of an image, i.e., to test whether the image has undergone any alteration since it was created. Digital watermarking has become a promising technique for image content verification because of its strong performance and its ability to detect tampering.
In this study, a new scheme for image verification based on two-dimensional chaotic maps and the Discrete Wavelet Transform (DWT) is introduced. The Arnold transform is first applied to the host image (H) for scrambling as a pre-processing stage; the scrambled host image is then partitioned into 2×2 sub-blocks, and a 2D DWT is utilized on ea
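The excerpt names the Arnold transform as the scrambling step, so the following is only a minimal sketch of the classic Arnold cat map applied to a square image; the iteration count is an assumed parameter, and the paper may use a generalized variant with different coefficients.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square image with the classic Arnold cat map.

    Each pixel (x, y) is moved to ((x + y) mod N, (x + 2y) mod N). The map is
    periodic, so the scrambling can be undone by applying the inverse map or
    by iterating until the period completes. The iteration count is an
    assumed parameter, not a value from the paper.
    """
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map is defined on square images"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = scrambled
    return out
```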
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most significant and least significant. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), adaptive quadtree (QT) partitioning, and an adaptive shift encoder. A lossy compression system is introduced to handle the least significant value (LSV); it is based on
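The excerpt says each pixel value is decomposed into a most significant and a least significant value but does not give the split point, so the following is only an illustrative sketch assuming each 8-bit sample is divided into its upper and lower bits; the 4-bit split is an assumption.

```python
import numpy as np

def decompose_band(band, msv_bits=4):
    """Split an 8-bit band into most- and least-significant values.

    band     -- uint8 numpy array for one YUV band.
    msv_bits -- assumed number of bits kept in the most significant value.
    Returns (msv, lsv) such that (msv << (8 - msv_bits)) | lsv == band.
    """
    shift = 8 - msv_bits
    msv = band >> shift                 # handled losslessly in the described scheme
    lsv = band & ((1 << shift) - 1)     # handled lossily in the described scheme
    return msv, lsv
```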
Steganography is the art of preventing the detection of hidden information messages. Several steganographic algorithms have been proposed, and a major portion of them targets image steganography because images have a high level of redundancy. This paper proposes an image steganography technique that uses a dynamic threshold produced from the discrete cosine coefficients. After dividing the green and blue channels of the cover image into 1×3-pixel blocks, the method checks whether any bits of a green-channel block are less than or equal to the threshold; if so, it starts to store the secret bits in the corresponding blue-channel block. To increase security, not all bits in the chosen block are used to store the secret bits. Firstly, store in the cente
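The excerpt describes selecting blue-channel blocks by comparing the green channel against a dynamic threshold, but the embedding rule itself is cut off, so the following is only a hedged sketch of that selection step combined with simple LSB embedding; the one-LSB-per-pixel rule and the precomputed threshold parameter are assumptions.

```python
import numpy as np

def embed_bits(green, blue, secret_bits, threshold):
    """Embed secret bits into blue-channel blocks selected by the green channel.

    green, blue -- uint8 arrays of identical shape, scanned as 1x3-pixel blocks.
    secret_bits -- iterable of 0/1 values to hide.
    threshold   -- dynamic threshold (assumed precomputed from DCT coefficients).
    Only blocks whose green values fall at or below the threshold receive data,
    and one least-significant bit per blue pixel is used here (an assumption).
    """
    bits = iter(secret_bits)
    stego = blue.copy().ravel()
    g = green.ravel()
    for start in range(0, len(g) - 2, 3):                 # walk 1x3 blocks
        if (g[start:start + 3] <= threshold).any():       # block selected by green channel
            for i in range(start, start + 3):
                try:
                    bit = next(bits)
                except StopIteration:
                    return stego.reshape(blue.shape)      # all secret bits embedded
                stego[i] = (stego[i] & 0xFE) | bit        # overwrite the LSB
    return stego.reshape(blue.shape)
```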
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transmission cost, and maintain good quality. In the current work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from a scalar quantization process (reducing the number of bits from 24 to 8 per pixel) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and
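The excerpt mentions reducing 24-bit RGB pixels to 8 bits via scalar quantization before the coding stages, but the quantizer itself is not shown, so the following is only an illustrative sketch assuming a uniform 3-3-2 bit allocation across R, G, and B; that allocation is an assumption, not the paper's quantizer.

```python
import numpy as np

def quantize_rgb_332(img):
    """Scalar-quantize a 24-bit RGB image to 8 bits per pixel (3-3-2 allocation).

    img -- uint8 array of shape (H, W, 3).
    The 3-3-2 split is an illustrative assumption: 3 bits for red, 3 for green,
    2 for blue, packed into a single byte that a later coder (e.g. LZW)
    would compress further.
    """
    r = img[..., 0] >> 5          # keep top 3 bits of red
    g = img[..., 1] >> 5          # keep top 3 bits of green
    b = img[..., 2] >> 6          # keep top 2 bits of blue
    return ((r << 5) | (g << 2) | b).astype(np.uint8)
```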
This paper proposes a completion that allows fracturing four zones in a single trip in the well called “Y” (for confidential reasons) of the field named “X” (for confidential reasons). The steps to design a well completion for multiple fracturing are first to select the best completion method, then the required equipment and the materials it is made of. After that, the completion schematic must be drawn (using Power Draw in this case) and the summary installation procedures explained. The data used to design the completion are the well trajectory, the reservoir data (including temperature, pressure, and fluid properties), and the production and injection strategy. The results suggest that multi-stage hydraulic fracturing can
Conventional clustering algorithms are incapable of overcoming the difficulty of managing and analyzing the rapid growth of data generated from different sources. Parallel clustering is one of the robust solutions to this problem. The Apache Hadoop architecture is one of the ecosystems that provide the capability to store and process data in a distributed and parallel fashion. In this paper, a parallel model is designed to run the k-means clustering algorithm in the Apache Hadoop ecosystem by connecting three nodes: one server (name) node and two client (data) nodes. The aim is to speed up the time of managing the massive sc
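The excerpt describes parallelizing k-means on Hadoop but not the job structure, so the following is only a minimal sketch of one k-means iteration phrased as map and reduce steps in plain Python; in an actual Hadoop deployment the points would be sharded across the data nodes and the functions run as MapReduce tasks. Function names and the toy data are illustrative.

```python
import numpy as np

def map_step(points, centroids):
    """Map: assign each point to its nearest centroid, emitting (cluster_id, point)."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return list(zip(dists.argmin(axis=1), points))

def reduce_step(pairs, k, dim):
    """Reduce: average the points assigned to each cluster to form new centroids."""
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for cid, p in pairs:
        sums[cid] += p
        counts[cid] += 1
    counts[counts == 0] = 1                     # avoid division by zero for empty clusters
    return sums / counts[:, None]

# One illustrative iteration on random data; a Hadoop job would shard `points`
# across the data nodes and run map_step on each shard before the reduce.
points = np.random.rand(1000, 2)
centroids = points[np.random.choice(len(points), 3, replace=False)]
centroids = reduce_step(map_step(points, centroids), k=3, dim=2)
```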