A Braille Recognition System captures an image of a Braille document and converts its content into the equivalent natural-language characters. It proceeds in two consecutive phases: Braille cell recognition and cell transcription. The system locates and recognizes Braille in a document stored as an image (for example JPEG, TIFF, or GIF) and converts it into a machine-readable format such as a text file; in other words, Braille character recognition (BCR) translates an image's pixel representation into its character representation. Braille recognition benefits staff at schools and institutes for the visually impaired in a variety of ways. A typical system comprises several stages, including image acquisition, image pre-processing, and character recognition. This review examines earlier studies on Braille cell recognition and transcription, compares their results and detection techniques, and is intended to be useful and illuminating for Braille recognition researchers, especially newcomers.
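As an illustration of the stages named above (acquisition, pre-processing, recognition), the following is a minimal sketch of the dot-detection step using OpenCV. The file path, area bounds, and cell-grouping logic are assumptions for illustration only, not the method of any paper covered by the review.

```python
# Minimal sketch of a Braille dot-detection step (illustrative only).
import cv2

def detect_braille_dots(image_path):
    """Return centroids of candidate Braille dots in a scanned page."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Pre-processing: smooth, then binarise with Otsu's automatic threshold.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Connected components: blobs of roughly dot-like area are candidate dots.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    dots = [tuple(centroids[i]) for i in range(1, n)
            if 10 < stats[i, cv2.CC_STAT_AREA] < 400]  # area bounds are a guess
    return dots

# The remaining stage (grouping dots into 2x3 cells and mapping each six-dot
# pattern to a character) depends on the cell grid spacing of the document.
```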
Optical Character Recognition (OCR) is the process of converting an image of text into a machine-readable text format, and the classification of Arabic manuscripts in general is part of this field. In recent years, the processing of Arabic image databases with deep learning architectures has developed remarkably, but it still falls short of covering the enormous wealth of Arabic manuscripts. In this research, a deep learning architecture is used to classify handwritten Arabic letters. The method is based on a convolutional neural network (CNN) that acts as both feature extractor and classifier. Considering the nature of the dataset images (binary images), the contours of the alphabet
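Since the paper's exact architecture is not reproduced here, the following is only a generic sketch of a CNN that serves as both feature extractor and classifier, assuming 32x32 binary letter images and 28 letter classes (both assumptions).

```python
# Generic CNN sketch for handwritten Arabic letter classification (Keras).
from tensorflow.keras import layers, models

def build_arabic_letter_cnn(num_classes=28, input_shape=(32, 32, 1)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```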
The purpose of this study was to find the relationship between water parameters measured in the laboratory and the water index derived from a satellite image of the study area, by analysing Landsat-8 imagery together with a geographic information system (GIS). The primary goal is to develop a model of the chemical and physical characteristics of the Al-Abbasia River in Al-Najaf Al-Ashraf Governorate. The water parameters employed in this investigation are pH, EC, TDS, TSS, Na, Mg, K, SO4, Cl, and NO3. Ten sampling locations were identified to collect the samples, and the satellite image was obtained on the
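As a hedged illustration of relating a satellite-derived water index to laboratory parameters, the sketch below computes NDWI from the Landsat-8 green (B3) and NIR (B5) bands and correlates values sampled at ten stations with TDS readings. NDWI and every numeric value are placeholders, not the study's data or its chosen index.

```python
# Illustrative water-index vs. lab-parameter correlation (hypothetical numbers).
import numpy as np

def ndwi(green, nir):
    """Normalised Difference Water Index from Landsat-8 green (B3) and NIR (B5)."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)

# Index values sampled at the ten station pixels vs. lab-measured TDS (mg/L)
index_at_stations = np.array([0.21, 0.18, 0.25, 0.22, 0.19,
                              0.27, 0.24, 0.20, 0.23, 0.26])
tds_lab = np.array([410, 455, 380, 400, 440, 360, 390, 430, 405, 370])

r = np.corrcoef(index_at_stations, tds_lab)[0, 1]
slope, intercept = np.polyfit(index_at_stations, tds_lab, 1)
print(f"Pearson r = {r:.3f}, regression: TDS = {slope:.1f}*NDWI + {intercept:.1f}")
```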
Skull stripping is one of the initial procedures used to detect brain abnormalities. In a brain MRI image, it distinguishes brain tissue from non-brain tissue. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping of brain magnetic resonance volumes has therefore become increasingly popular, driven by the need for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately for neuroimaging diagnostic systems since neither no
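For orientation only, the following is a simplified intensity-based skull-stripping sketch for a single 2-D slice (threshold, morphological opening, largest connected component). It is not the paper's algorithm; production tools are considerably more sophisticated.

```python
# Simplified skull-stripping sketch for one MRI slice (illustrative only).
import numpy as np
from scipy import ndimage

def strip_skull(slice_2d):
    """Return a rough binary brain mask for a 2-D MRI slice."""
    # Assumption: brain tissue is brighter than the image background.
    threshold = slice_2d.mean() + 0.5 * slice_2d.std()
    mask = slice_2d > threshold
    # Morphological opening removes thin skull/scalp connections.
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
    # Keep the largest connected component and fill internal holes.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)
    return ndimage.binary_fill_holes(mask)
```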
Nations develop through education and knowledge, which raise the standing of society in its various segments; without them, societies slide into underdevelopment and deterioration across sectors, whether economic, health, or social. If we consider the general name of the Ministry of Education & Scientific Studies, the second part appears not to be functioning, since scientific research has no financial allocation and depends on the personal resources of the university professor. As for the first half, the reality of the situation reveals problems with the three pillars of education (professor, student, and the scientific method), from which universities suffer at the present time, and
This research work aims at the determination of molybdenum(VI) ion via the formation of peroxy-molybdenum compounds, which are red-brown in colour with an absorbance wavelength of 455 nm, for the ammonia solution-hydrogen peroxide-molybdenum(VI) system, using a newly developed microphotometer based on on-line measurement. The variation of responses is expressed in millivolts. A correlation coefficient of 0.9925 was obtained over the range 2.5-150 µg.ml-1 with a percentage linearity of 98.50%, and a detection limit of 0.25 µg.ml-1. All physical and chemical variables were optimized, interferences from cations and anions were studied, and classical measurements were performed and compared well with the new on-line measurements. Application for the use
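The arithmetic behind figures of merit such as the correlation coefficient and detection limit can be sketched as follows. Only the concentration range is taken from the abstract; the response values and blank standard deviation are hypothetical.

```python
# Calibration-curve sketch with hypothetical responses (mV) over 2.5-150 ug/mL.
import numpy as np

conc = np.array([2.5, 10, 25, 50, 75, 100, 150])                      # ug/mL Mo(VI)
response_mv = np.array([4.1, 15.8, 39.5, 79.0, 117.2, 156.5, 231.0])  # detector output, mV

slope, intercept = np.polyfit(conc, response_mv, 1)
r = np.corrcoef(conc, response_mv)[0, 1]

# Detection limit commonly estimated as 3 * (standard deviation of blank) / slope
sd_blank = 0.13   # mV, hypothetical
lod = 3 * sd_blank / slope

print(f"slope = {slope:.3f} mV per ug/mL, r = {r:.4f}, LOD ~ {lod:.2f} ug/mL")
```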
A great many image processing systems are in daily use and under continual development. These systems rely on basic operations such as detecting regions of interest, describing their properties, and matching those regions. These operations play a significant role in the decision making required by subsequent processing, depending on the assigned task. Various algorithms have been introduced over the years to accomplish these tasks, and one of the most popular is the Scale Invariant Feature Transform (SIFT). The strength of this algorithm lies in its performance in detection and property description, and that is due to the fact that
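For readers unfamiliar with SIFT, the sketch below shows standard keypoint detection, description, and ratio-test matching with OpenCV (version 4.4 or later, where SIFT is in the main module); the image file names are placeholders.

```python
# Standard SIFT detection/description and Lowe ratio-test matching (OpenCV).
import cv2

img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matcher; the ratio test keeps only distinctive matches.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} good matches")
```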
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprises five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. Normalization standardizes the pixel intensities, which facilitates the subsequent enhancement stages. Histogram equalization then increases the contrast of the images. Furthermore, binarization and skeletonization are implemented to differentiate between the ridge and valley structures and to obtain one
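A minimal sketch of this enhancement chain, assuming OpenCV and scikit-image, is given below. The target mean and variance are illustrative, and the fusion stage is omitted because its weighting scheme is specific to the paper.

```python
# Sketch of the normalization -> equalization -> binarization -> skeletonization chain.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(gray, target_mean=100.0, target_var=100.0):
    # Normalisation to a fixed mean and variance.
    g = gray.astype(float)
    norm = target_mean + np.sqrt(target_var) * (g - g.mean()) / (g.std() + 1e-9)
    norm = np.clip(norm, 0, 255).astype(np.uint8)
    # Histogram equalisation for contrast.
    eq = cv2.equalizeHist(norm)
    # Binarisation separates ridges (foreground) from valleys.
    _, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Skeletonisation thins each ridge to a one-pixel-wide line.
    skeleton = skeletonize(binary > 0)
    return skeleton.astype(np.uint8) * 255
```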
Internet image retrieval is an interesting task that draws on both image processing and relationship-structure analysis. This paper proposes a compression method, based on image retrieval, for cases where more than one photo must be sent via the internet. First, face detection is performed using local binary patterns. The background is then identified by matching global self-similarities and comparing it with the backgrounds of the other images. The proposed algorithm bridges the gap between current image indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on computers are already JPEG-compressed at the source. Similar images are found and only a few images are sent instead
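As a hedged illustration of the local-binary-pattern texture description and similarity matching the abstract refers to, consider the following sketch; it is not the authors' exact algorithm, and the neighbourhood parameters are assumptions.

```python
# LBP histogram description of an image region and a simple similarity measure.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, points=8, radius=1):
    """Uniform-LBP histogram of a grayscale region, normalised to sum to 1."""
    lbp = local_binary_pattern(region, points, radius, method="uniform")
    n_bins = points + 2   # uniform LBP codes range over 0..points+1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 means identical texture distributions."""
    return np.minimum(h1, h2).sum()
```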
Ground Penetrating Radar (GPR) is a nondestructive geophysical technique that uses electromagnetic waves in a band of the radio spectrum to evaluate subsurface information. A GPR unit emits a short pulse of electromagnetic energy and determines the presence or absence of a target by examining the energy reflected from that pulse. In this research, GPR was used to survey different buried objects, namely iron, plastic (PVC), and aluminum, at a specified depth of about 0.5 m using a 250 MHz antenna. The response of each object can be recognized from its shape, and this recognition was performed using image processing
New algorithms for enhancing quality in auto-focus image fusion are proposed. The first algorithm combines two images based on the local standard deviation. The second concentrates on the contrast at edge points and uses a correlation method as the criterion for the quality of the resulting image. This algorithm considers three blocks of different sizes in a homogeneous region and moves them 10 pixels within that region; the blocks examine local statistical properties and automatically decide the next step. The resulting combined image is better in contrast
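A minimal sketch of the first (standard-deviation) fusion rule follows, assuming two registered grayscale source images and an illustrative block size: for each block, the fused image copies the block from whichever source is locally sharper, i.e. has the higher standard deviation.

```python
# Block-wise standard-deviation fusion of two registered grayscale images.
import numpy as np

def fuse_by_std(img_a, img_b, block=16):
    """Auto-focus fusion: per block, keep the source with higher local std."""
    fused = np.zeros_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y+block, x:x+block]
            b = img_b[y:y+block, x:x+block]
            # Higher local standard deviation is taken as the in-focus block.
            fused[y:y+block, x:x+block] = a if a.std() >= b.std() else b
    return fused
```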