Abstract— The growing use of digital technologies across various sectors and daily activities has made handwriting recognition a popular research topic. Although handwriting remains relevant, handwritten documents still need to be converted into digital versions that can be stored and shared electronically. Handwriting recognition refers to a computer's ability to identify and interpret legible handwritten input from various sources, including documents, photographs and others. It poses a considerable challenge because handwriting styles vary widely among individuals, especially in real-time applications. In this paper, an automatic handwriting recognition system was designed using a recent artificial intelligence algorithm, the convolutional neural network (CNN). Different CNN models were tested and modified to produce a system with two important features: high accuracy and short testing time, the most important factors for real-time applications. The experiments were conducted on a dataset of over 400,000 handwritten names; the best accuracy, 99.8%, was achieved by the SqueezeNet model.
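As background for the approach described above, the sketch below fine-tunes a pretrained SqueezeNet for handwritten-name classification in PyTorch. It is a minimal illustration, not the authors' code: the number of classes, input size and training settings are assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): fine-tuning SqueezeNet
# for handwritten-name classification. Requires torchvision >= 0.13 for the
# weights API.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 500          # hypothetical number of distinct name labels

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
# SqueezeNet classifies with a 1x1 convolution; swap it for our label count.
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
model.num_classes = num_classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step on a batch of word images (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

SqueezeNet is a natural candidate when short testing time matters, since its fire modules keep the parameter count and inference cost low compared with larger CNNs.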
Enhanced oil recovery is used in many mature oil reservoirs to increase the oil recovery factor, and surfactant flooding has recently regained interest. Surfactant flooding is the injection of surfactants (and co-surfactants) into the reservoir to create microemulsions at the interface between crude oil and water, achieving very low interfacial tension, which in turn helps mobilize the trapped oil.
In this study, a high-pressure flooding system was manufactured and is described; the flooding processes involved oil, water and surfactants. Fifteen core holders were prepared in the first stage of the experiment, filled with washed sand grains of 80-500 mm, and the sand was then packed to obtain sand packs
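The mechanism invoked above, very low interfacial tension mobilizing trapped oil, is usually quantified through the capillary number. The definition below is standard background, not a result of this study; here μ is the displacing-fluid viscosity, v the Darcy velocity and σ the oil-water interfacial tension.

```latex
% Capillary number: ratio of viscous to capillary forces acting on trapped oil.
% Lowering the interfacial tension \sigma by surfactant flooding raises N_c
% by the same factor, which is what reduces the residual oil saturation.
N_c = \frac{\mu\, v}{\sigma}
```

Since surfactants can lower σ by several orders of magnitude, the capillary number rises correspondingly, which is the usual explanation for the additional oil recovered.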
Numerical simulations are carried out to assess the quality of circular and square apodized apertures in observing extrasolar planets. On a logarithmic scale, the normalized point spread functions of these apertures show a sharp decline in the radial frequency components, reaching 10⁻³⁶ and 10⁻³⁴ respectively, demonstrating promising results. This decline is associated with an increase in the full width of the point spread function, so a trade-off must be made between this full width and the radial frequency components to overcome the problem of imaging extrasolar planets.
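The kind of simulation described above can be reproduced in outline as follows: build an apodized aperture, Fourier-transform it, and inspect the normalized point spread function on a log scale. The grid size and the cosine apodizer below are assumptions for illustration, not the paper's actual aperture functions.

```python
# Illustrative sketch: normalized PSF of a circular aperture with a cosine-type
# apodizer, computed as the squared magnitude of the aperture's Fourier transform.
import numpy as np

N = 1024                                   # grid size (assumed)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)

pupil = (r <= 1.0).astype(float)           # clear circular aperture
apodizer = np.cos(0.5 * np.pi * r) * pupil # smooth taper to zero at the rim

psf = np.abs(np.fft.fftshift(np.fft.fft2(apodizer)))**2
psf /= psf.max()                           # normalized PSF

log_psf = np.log10(psf + 1e-40)            # logarithmic scale, as in the study
```

Replacing the circular pupil with a square mask gives the second configuration compared in the abstract; the trade-off appears as a wider central lobe whenever the wings are suppressed more strongly.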
In this research, several estimators of the hazard function are introduced. These estimators are based on a nonparametric method, namely the kernel function for censored data, with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best for all the
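For context, a common kernel hazard estimator for right-censored data (the smoothed Nelson-Aalen, or Ramlau-Hansen, form) is recalled below; the exact estimators and boundary corrections proposed in the paper may differ from this baseline.

```latex
% Kernel-smoothed hazard estimate at time t with bandwidth b:
% T_{(i)} are the ordered event times, d_i the number of events at T_{(i)},
% and n_i the number of subjects still at risk just before T_{(i)}.
\hat{h}_b(t) = \frac{1}{b} \sum_{i=1}^{n} K\!\left(\frac{t - T_{(i)}}{b}\right) \frac{d_i}{n_i}
```

A global bandwidth keeps b fixed over the whole time axis, whereas a local bandwidth replaces b with b(t); boundary kernels modify K near the ends of the observation interval, where an ordinary symmetric kernel is biased.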
The railway network is one of the largest infrastructure projects; therefore, analyzing and developing such projects should be done using appropriate tools, i.e. GIS tools, because traditional methods consume resources, time and money, and their results may not be accurate. In this research, the train stations in all of Iraq's provinces were studied and analyzed using network analysis, which is one of the most powerful techniques within GIS. A free trial copy of ArcGIS®10.2 software was used in this research in order to achieve the aim of this study. The analysis of the current train stations was carried out based on the road network, because people use roads to reach those train stations. The data layers for this st
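The network-analysis idea used above, measuring how well road users can reach stations over the road network, can be illustrated with a toy graph. The sketch below uses Python/networkx with hypothetical nodes and distances; the study itself used ArcGIS 10.2 network analysis on real road and station layers.

```python
# Illustrative only: shortest road distance from each neighbourhood node to the
# nearest train station on a toy road graph, standing in for the ArcGIS network
# analysis used in the study. Node names and lengths are hypothetical.
import networkx as nx

road = nx.Graph()
road.add_weighted_edges_from([        # (node, node, road length in km)
    ("A", "B", 4.0), ("B", "C", 3.5),
    ("C", "Station1", 2.0), ("A", "Station2", 6.5),
])

stations = ["Station1", "Station2"]

for origin in ["A", "B", "C"]:
    dists = [
        nx.shortest_path_length(road, origin, s, weight="weight")
        for s in stations
    ]
    print(origin, "-> nearest station at", min(dists), "km")
```

In the GIS setting, the same logic underlies service-area and closest-facility analyses: stations are facilities, road segments are weighted edges, and accessibility is read off the shortest network paths rather than straight-line distances.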
Logistic regression is considered one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X); a second consideration is the homogeneity of the variance. The dependent variable is a binary response that takes two values (one when a specific event occurs and zero when it does not), such as injured and uninjured, or married and unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate a binary-response logistic regression model by adopting the Jackknife
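As a point of reference for the ridge-type estimation mentioned above, the sketch below fits an L2-penalized (ridge) binary logistic regression with scikit-learn on synthetic, collinear data. It illustrates the general technique only; it is not the paper's estimator, data or Jackknife procedure.

```python
# Illustrative only: ridge-penalized binary logistic regression on synthetic,
# highly collinear predictors (a stand-in for the multicollinearity setting
# discussed above; not the paper's data or exact estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
p = 1.0 / (1.0 + np.exp(-(1.5 * x1 - 0.5 * x3)))  # true event probabilities
y = (rng.uniform(size=n) < p).astype(int)         # binary response

# penalty="l2" with a finite C shrinks the coefficients, stabilising them
# when predictors are strongly correlated (the ridge idea).
ridge_logit = LogisticRegression(penalty="l2", C=0.5, solver="lbfgs").fit(X, y)
print(ridge_logit.coef_, ridge_logit.intercept_)
```

Without the penalty, the coefficients of x1 and x2 can swing to large opposite-signed values from sample to sample, which is exactly the instability that ridge-type shrinkage is meant to control.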
Highly Modified Asphalt (HiMA) binders have garnered significant attention due to their superior resistance to rutting, fatigue cracking, and thermal distress under heavy traffic loads and extreme environmental conditions. While elastomeric polymers such as Styrene-Butadiene-Styrene (SBS) have been extensively used in HiMA applications, the potential of plastomeric polymers, including Polyethylene (PE) and Ethylene Vinyl Acetate (EVA), remains largely unexplored. This study aims to evaluate the performance of a reference binder (RB) modified with plastomeric polymers at HiMA dosages in comparison to SBS-modified binders, and to determine the polymer dosage that achieves an optimal balance between rutting resistance and fatigue durability. The experi
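For readers unfamiliar with how rutting resistance and fatigue durability of binders are commonly quantified, the classical Superpave criteria are recalled below as background only; whether this study used these parameters or other measures (such as MSCR or LAS tests) is not stated in the abstract.

```latex
% Common Superpave binder criteria (background, not necessarily this study's tests),
% from dynamic shear rheometer measurements of the complex modulus |G*| and the
% phase angle \delta:
\frac{|G^{*}|}{\sin\delta} \ge 1.0\ \text{kPa (original binder, rutting)}, \qquad
|G^{*}|\sin\delta \le 5000\ \text{kPa (PAV-aged binder, fatigue)}
```

The trade-off mentioned in the abstract shows up here directly: stiffening additives raise |G*|/sin δ (better rutting resistance) but tend to raise |G*|·sin δ as well, working against fatigue performance.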
The four-parameter expanded exponentiated power function (EEPF) distribution was presented, obtained by the exponentiated expansion method applied to the expanded power function distribution. This method is characterized by producing a new distribution belonging to the exponential family. The survival function and the failure (hazard) rate function of this distribution were obtained and some mathematical properties were derived. The developed least squares method was then used to estimate the parameters by means of a genetic algorithm, and a Monte Carlo simulation study was conducted to evaluate the performance of the estimates obtained with the genetic algorithm (GA).
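To make the estimation step concrete, the sketch below minimises an ordinary least squares criterion between the empirical CDF and a model CDF using SciPy's differential evolution, an evolutionary optimiser standing in for the paper's genetic algorithm. The function `eepf_cdf` is a placeholder with an assumed form, since the abstract does not give the exact four-parameter EEPF distribution.

```python
# Illustrative least-squares fitting of a four-parameter CDF with an
# evolutionary optimiser; the CDF below is a PLACEHOLDER, not the paper's EEPF.
import numpy as np
from scipy.optimize import differential_evolution

def eepf_cdf(x, a, b, c, d):
    # Hypothetical exponentiated power-function-style CDF on (0, d):
    # F(x) = [1 - (1 - (x/d)**c)**b]**a   (illustrative form only)
    z = np.clip(x / d, 0.0, 1.0)
    return (1.0 - (1.0 - z**c) ** b) ** a

def ols_criterion(theta, sample):
    a, b, c, d = theta
    x = np.sort(sample)
    n = len(x)
    ecdf = np.arange(1, n + 1) / (n + 1.0)          # empirical CDF positions
    return np.sum((eepf_cdf(x, a, b, c, d) - ecdf) ** 2)

sample = np.random.default_rng(1).uniform(0.1, 4.5, size=200)   # fake data
bounds = [(0.1, 10), (0.1, 10), (0.1, 10), (float(sample.max()), 20)]
result = differential_evolution(ols_criterion, bounds, args=(sample,), seed=1)
print(result.x)                                      # fitted (a, b, c, d)
```

A Monte Carlo study of the kind described would repeat this fit over many simulated samples of known parameters and summarise the bias and mean squared error of the resulting estimates.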
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that impairs speech, social interaction, and behavior. Machine learning is a field of artificial intelligence that focuses on creating algorithms that can learn patterns and classify ASD based on input data. The results of using machine learning algorithms to categorize ASD have been inconsistent, and more research is needed to improve classification accuracy. To address this, a deep learning approach, the one-dimensional convolutional neural network (1D CNN), is proposed as an alternative for ASD detection. The proposed technique is evaluated on three different publicly available ASD datasets (children, adults, and adolescents). Results strongly suggest that 1D
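A minimal 1D CNN classifier of the kind referred to in this abstract is sketched below in PyTorch; the input length (e.g. screening-questionnaire items) and the layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal 1D CNN for binary ASD screening classification (illustrative only;
# the input length of 20 features and the layer sizes are assumed).
import torch
import torch.nn as nn

class ASD1DCNN(nn.Module):
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)   # ASD vs. non-ASD

    def forward(self, x):                    # x: (batch, 1, n_features)
        return self.classifier(self.features(x).flatten(1))

model = ASD1DCNN()
dummy = torch.randn(8, 1, 20)                # a batch of 8 screening records
print(model(dummy).shape)                    # -> torch.Size([8, 2])
```

Treating each record as a one-dimensional sequence lets the convolutions learn local feature interactions that a plain fully connected classifier would have to discover from scratch, which is the usual motivation for 1D CNNs on tabular screening data.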