The Braille Recognition System (BRS) captures an image of a Braille document and converts its content into the equivalent natural-language characters. It comprises two successive core phases: Braille cell recognition and cell transcription. In practice, the system locates and recognizes Braille in a document stored as an image (e.g., JPEG, TIFF, or GIF) and converts the text into a machine-readable format such as a text file; that is, it translates the image's pixel representation into a character representation. Staff at schools and institutes for the visually impaired benefit from Braille recognition in a variety of ways. The system involves several stages, including image acquisition, image pre-processing, and character recognition. This review examines earlier studies on Braille cell recognition and transcription, compares their detection techniques and reported results, and should be useful and illuminating for BRS researchers, especially newcomers.
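As a concrete reference for the three stages named above, the following is a minimal Python sketch of such a pipeline; the Otsu binarization step, the dot-brightness assumption, and the tiny cell-to-character table are illustrative assumptions, not taken from any of the reviewed papers.

```python
# Minimal BRS sketch: acquisition -> pre-processing -> cell recognition.
import cv2

# A Braille cell is a 3x2 dot grid; each dot pattern maps to one character.
# Patterns are stored row by row (dots 1,4,2,5,3,6); only a few letters shown.
CELL_TO_CHAR = {
    (1, 0, 0, 0, 0, 0): "a",   # dot 1
    (1, 0, 1, 0, 0, 0): "b",   # dots 1, 2
    (1, 1, 0, 0, 0, 0): "c",   # dots 1, 4
}

def recognize_cell(cell, dot_thresh=0.3):
    """Split one binarized cell into a 3x2 grid and test each region for a dot."""
    h, w = cell.shape
    pattern = tuple(
        int(cell[r*h//3:(r+1)*h//3, c*w//2:(c+1)*w//2].mean() / 255 > dot_thresh)
        for r in range(3) for c in range(2)
    )
    return CELL_TO_CHAR.get(pattern, "?")

def transcribe_cell(path):
    # 1. Image acquisition: load the scanned Braille image as grayscale.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # 2. Pre-processing: Otsu binarization, assuming dots end up brighter
    #    than the background (polarity depends on scanner and lighting).
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 3. Recognition: a single pre-segmented cell is assumed here; a full
    #    system would first segment the page into its grid of cells.
    return recognize_cell(binary)
```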
Recently, the spread of fake news and misinformation has resonated widely across most fields of society. Combating this phenomenon by detecting misleading information manually is tedious, time-consuming, and impractical, so it is necessary to draw on artificial intelligence to solve the problem. This study therefore uses deep learning techniques to detect Arabic fake news based on an Arabic dataset called the AraNews dataset, which contains news articles covering multiple fields such as politics, economy, culture, and sports. A hybrid deep neural network has been proposed to improve accuracy; this network builds on the properties of the Text-Convolutional Neural Network (Text-CNN).
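As an illustration of the Text-CNN side of such a hybrid, here is a minimal Keras sketch; the vocabulary size, sequence length, and filter settings are assumptions for demonstration, not the study's actual architecture.

```python
# Minimal Text-CNN branch for binary fake/real news classification.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, EMB = 20000, 200, 128   # illustrative hyperparameters

inputs = layers.Input(shape=(SEQ_LEN,))            # integer token ids
x = layers.Embedding(VOCAB, EMB)(inputs)           # word embeddings
x = layers.Conv1D(64, 5, activation="relu")(x)     # n-gram feature maps
x = layers.GlobalMaxPooling1D()(x)                 # strongest response per filter
outputs = layers.Dense(1, activation="sigmoid")(x) # fake vs. real
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```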
Convergence speed is the most important feature of the Back-Propagation (BP) algorithm, and many improvements have been proposed since its introduction to speed up the convergence phase. In this paper, a new modified BP algorithm called Speeding up Back-Propagation Learning (SUBPL) is proposed and compared to standard BP. Several data sets were used in experiments to verify the improvement achieved by SUBPL.
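For reference, here is a minimal NumPy sketch of one standard BP update on a one-hidden-layer network; the abstract does not detail SUBPL's specific modification, so only the baseline update is shown, with an illustrative learning rate.

```python
# One standard back-propagation step on a one-hidden-layer network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, y, W1, W2, lr=0.1):
    # forward pass
    h = sigmoid(x @ W1)          # hidden activations
    out = sigmoid(h @ W2)        # network output
    # backward pass (squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_h)
    return W1, W2
```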
Microcontrollers have recently seen wide use in monitoring and data acquisition, and this development has given rise to various architectures for deploying and interfacing microcontrollers in a network environment. Some existing architectures suffer from redundant resources, extra processing, high cost, and delayed response. This paper presents a flexible, concise architecture for building a distributed networked microcontroller system. The system consists of a single server working over the internet and a set of microcontrollers distributed across different sites, each connected to the internet through Ethernet. In this system, a client requesting data from a given site is served through that single server.
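A minimal sketch of this single-server relay pattern might look as follows; the port, the /read endpoint, and the site-to-address table are hypothetical, since the paper's concrete protocol is not reproduced here.

```python
# Single server relaying client requests to site microcontrollers.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical map from site name to its microcontroller's address.
SITES = {"site1": "http://192.168.1.10", "site2": "http://192.168.1.11"}

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        site = self.path.strip("/")                   # e.g. GET /site1
        if site not in SITES:
            self.send_error(404, "unknown site")
            return
        data = urlopen(SITES[site] + "/read").read()  # poll the microcontroller
        self.send_response(200)
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("", 8080), RelayHandler).serve_forever()
```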
The current research seeks to identify the impact of Barman's model on the acquisition of the concepts of the jurisprudence of worship among students of the departments of Qur'anic sciences. To achieve the research objectives, the researcher relied on the experimental method, using a partial-control design with an experimental group taught using the Barman model and a control group taught through the normal method. After applying the experiment, the study reached the following results: students of the Department of Qur'anic Sciences in general need educational programs linked to the curriculum and built on scientific foundations, according to their needs and problems (psychological, cognitive, and social).
The current research aims to identify the effect of using the generative learning model on first-intermediate students' achievement of chemical concepts in science. The researcher adopted the null hypothesis that there is no statistically significant difference at the (0.05) level between the mean scores of the experimental group, which studied using the generative learning model, and the control group, which studied using the traditional method, on the chemical concepts achievement test. The research sample consisted of (200) first-intermediate students at Al-Farqadin Intermediate School for Boys, affiliated with the Directorate of General Education in Baghdad Governorate / Al-Karkh 3.
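The design compares two group means at the 0.05 level; the abstract does not name the exact statistical test, but an independent-samples t-test is the usual choice for this design, sketched below with made-up scores.

```python
# Two-group mean comparison at the 0.05 significance level.
from scipy import stats

experimental = [78, 82, 75, 88, 91, 70, 85]   # generative learning group (dummy data)
control      = [65, 72, 68, 74, 70, 66, 71]   # traditional method group (dummy data)

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")            # reject H0 when p < 0.05
```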
Image quality in PET can be estimated and predicted using the signal-to-noise ratio (SNR). The purpose of this study is to investigate the relationship between body mass index (BMI) and SNR measurements in PET imaging, using studies of patients with liver cancer. Fifty-nine patients (24 males and 35 females) were divided into three groups according to BMI. After intravenous injection of 0.1 mCi of 18F-FDG per kilogram of body weight, PET emission scans were acquired for (1, 1.5, and 3) min/bed position according to patient weight. Because the liver is an organ of homogeneous metabolism, five regions of interest (ROIs) were drawn at the same location on five successive slices of the PET/CT scans to determine the mean uptake (signal) values and their standard deviation.
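A minimal sketch of the SNR measurement described above follows: the mean uptake in each liver ROI serves as the signal and its standard deviation as the noise. The ROI arrays are illustrative placeholders; real input would come from the PET/CT slices.

```python
# SNR over five liver ROIs: mean uptake / standard deviation, averaged.
import numpy as np

def roi_snr(roi):
    """SNR of one ROI: mean uptake (signal) divided by its standard deviation."""
    return np.mean(roi) / np.std(roi)

# Five ROIs at the same location on five successive slices (dummy data).
rois = [np.random.normal(loc=2.5, scale=0.3, size=(20, 20)) for _ in range(5)]
mean_snr = np.mean([roi_snr(r) for r in rois])
print(f"mean liver SNR over 5 ROIs: {mean_snr:.2f}")
```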
Upscaling grayscale images plays a central role in various fields such as medicine, satellite imagery, and photography. This paper presents a technique for improving grayscale-image upscaling using a new mixed wavelet generated by the tensor product. The proposed technique employs the multi-resolution analysis provided by a new mixed wavelet transform algorithm to decompose the input image into different frequency components. After processing, the low-resolution input image is effectively transformed into a higher-resolution representation by zero-padding (adding a matrix of zeros). The discrete wavelet transform (Daubechies and Haar wavelets) is used as a 2D matrix and mixed, via the tensor product, with a wavelet matrix of a different size. MATLAB R2021 was used for the implementation.
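To make the mixing idea concrete, the sketch below combines a 2x2 Haar matrix with a 4x4 Daubechies (D4) transform matrix via the Kronecker (tensor) product; both matrices are standard forms, but their combination here only illustrates the idea and is not the paper's exact algorithm.

```python
# Mixing two wavelet matrices of different sizes with a tensor product.
import numpy as np

# Orthonormal 2x2 Haar analysis matrix.
haar = np.array([[1,  1],
                 [1, -1]]) / np.sqrt(2)

# Daubechies D4 filter coefficients arranged as the standard 4x4
# transform matrix (periodic boundary).
c = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
db4 = np.array([[c[0],  c[1], c[2],  c[3]],
                [c[3], -c[2], c[1], -c[0]],
                [c[2],  c[3], c[0],  c[1]],
                [c[1], -c[0], c[3], -c[2]]])

mixed = np.kron(haar, db4)   # tensor product -> 8x8 mixed wavelet matrix
print(mixed.shape)           # (8, 8)
```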
Deepfakes are a type of artificial-intelligence-generated media used to create convincing image, audio, and video hoaxes; they concern celebrities and the general public alike because they are easy to manufacture. Deepfakes are hard to recognize, both for people and for current approaches, especially high-quality ones. As a defense against deepfake techniques, various methods to detect deepfakes in images have been suggested. Most of them have limitations, such as working only when an image contains a single face that faces forward with both eyes and the mouth open, depending on which part of the face they operate on. Beyond that, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect.
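The following is a minimal sketch of the kind of pre-processing pipeline whose impact such a framework would study: detect the face, crop it, resize, and normalize before classification. The Haar-cascade detector and 224x224 input size are illustrative assumptions, not the paper's configuration.

```python
# Face crop + resize + normalization ahead of a deepfake classifier.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(path, size=224):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None                         # no face found
    x, y, w, h = faces[0]                   # handles one face only -- the
    face = img[y:y+h, x:x+w]                # very limitation cited above
    face = cv2.resize(face, (size, size))
    return face.astype(np.float32) / 255.0  # scale pixels to [0, 1]
```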
Image classification is the process of finding common features in images from various classes and applying those features to categorize and label the images. The abundance of images, the high complexity of the data, and the shortage of labeled data present the key obstacles in image classification. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new approach of “hybrid learning” that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers.
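A minimal sketch of this hybrid-learning idea follows: convolutional features are extracted with VGG-16, then a classical classifier is trained on them. The SVM stands in for whichever of the seven classifiers is used; the arrays are placeholders for a real dataset.

```python
# Hybrid learning: VGG-16 convolutional features + a classical classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# VGG-16 without its dense head, pooled to one 512-d vector per image.
backbone = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):    # images: (n, 224, 224, 3) in [0, 255]
    x = tf.keras.applications.vgg16.preprocess_input(images)
    return backbone.predict(x)

images = np.random.rand(8, 224, 224, 3) * 255   # placeholder batch
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])     # placeholder labels

clf = SVC().fit(extract_features(images), labels)
```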
This study investigates the influence of five nanomaterials, nano-alumina (NA), nano-silica (NS), nano-titanium (NT), nano-zinc oxide (NZ), and carbon nanotubes (CNT), on enhancing the fatigue resistance of asphalt binders. NA, NS, and NT were incorporated at dosages of 2%, 4%, 6%, 8%, and 10%, while NZ and CNT were added at 1%, 2%, 3%, 4%, and 5%. A series of physical, rheological, and performance-based tests were conducted, including penetration, softening point, ductility, and rotational viscosity. Based on the outcomes of the overall desirability evaluation, the first three dosages of each nanomaterial were selected for further testing due to their superior workability and binder flexibility. Subsequent investigations included high-temperature testing.
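As a sketch of how an overall-desirability screening like this can be scored, each test response is scaled to [0, 1] and the scores are combined by a geometric mean; the property targets and values below are illustrative, not the study's data.

```python
# Derringer-style overall desirability for screening binder dosages.
import numpy as np

def desirability(value, low, high, larger_is_better=True):
    """Linear desirability score, clipped to [0, 1]."""
    d = (value - low) / (high - low)
    if not larger_is_better:
        d = 1.0 - d
    return float(np.clip(d, 0.0, 1.0))

# One candidate dosage scored on two binder properties (dummy targets).
d_soft = desirability(52.0, 48.0, 58.0)                       # softening point, deg C
d_visc = desirability(0.9, 0.5, 3.0, larger_is_better=False)  # rotational viscosity, Pa.s
overall = np.sqrt(d_soft * d_visc)    # geometric mean of the two scores
print(f"overall desirability: {overall:.2f}")
```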