Feature Extraction in Six Blocks to Detect and Recognize English Numbers

    The Fuzzy Logic method was implemented in this paper to detect and recognize English numbers. The features extracted within this method make detection easy and accurate. These features depend on the crossing points of two vertical lines with one horizontal line, which are fed to the Fuzzy Logic method, as shown by the Matlab code in this study. The font types are Times New Roman, Arial, Calabria, Arabic, and Andalus, with font sizes of 10, 16, 22, 28, 36, 42, 50, and 72. The numbers are isolated automatically with the designed algorithm, for which the code is also presented. Each number's image is tested with the Fuzzy algorithm using six block properties only. Groups of regions (High, Medium, and Low) for each number showed unique behavior, allowing any number to be recognized. The Normalized Absolute Error (NAE) equation was used to evaluate the error percentage of the suggested algorithm; the lowest error was 0.001% compared with the real number. The data were checked with the support vector machine (SVM) algorithm to confirm the quality and efficiency of the suggested method, and the matching between the data of the suggested method and the SVM was found to be 100%. The six properties offer a new way to build a rule-based feature-extraction technique for different applications and to support text recognition at a low computational cost.
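The abstract does not reproduce the NAE equation itself; a minimal sketch, assuming the common definition NAE = Σ|reference − estimate| / Σ|reference| (the paper's exact formulation may differ), would be:

```python
import numpy as np

def normalized_absolute_error(reference, estimate):
    """Normalized Absolute Error between a reference image and an estimate.

    Assumes the common definition NAE = sum(|ref - est|) / sum(|ref|);
    the paper's exact formulation is not shown in this excerpt.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return np.abs(reference - estimate).sum() / np.abs(reference).sum()

# Example: a tiny binary "digit" image and a one-pixel-off estimate.
ref = np.array([[1, 1, 0],
                [0, 1, 0],
                [0, 1, 1]])
est = np.array([[1, 1, 0],
                [0, 1, 0],
                [0, 1, 0]])
print(normalized_absolute_error(ref, est))  # 1 differing pixel / 5 set pixels = 0.2
```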

Publication Date
Fri Jul 01 2022
Journal Name
Iraqi Journal Of Science
Extractive Multi-Document Text Summarization Using Multi-Objective Evolutionary Algorithm Based Model

Automatic document summarization technology is evolving and may offer a solution to the problem of information overload. Multi-document summarization is an optimization problem that demands optimizing more than one objective function concurrently. The proposed work balances two significant objectives, content coverage and diversity, while generating a summary from a collection of text documents. Despite the considerable efforts of several researchers in designing and evaluating the performance of many text summarization techniques, their formulations lack any model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. The design of gener…

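As an illustrative sketch only (not the paper's model), the two contradictory objectives can be given simple vector-space definitions; the `coverage` and `diversity` functions and the toy sentence vectors below are all assumptions of this example:

```python
# Coverage rewards similarity of the selected sentences to the collection
# centroid; diversity penalizes redundancy among the selected sentences.
import numpy as np

def coverage(selected, centroid):
    """Mean cosine similarity between selected sentence vectors and the centroid."""
    sims = selected @ centroid / (np.linalg.norm(selected, axis=1)
                                  * np.linalg.norm(centroid))
    return sims.mean()

def diversity(selected):
    """One minus the mean pairwise cosine similarity among selected sentences."""
    normed = selected / np.linalg.norm(selected, axis=1, keepdims=True)
    sims = normed @ normed.T
    k = len(selected)
    off_diag = (sims.sum() - k) / (k * (k - 1))   # exclude self-similarities
    return 1 - off_diag

sentences = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # toy sentence vectors
centroid = sentences.mean(axis=0)
summary = sentences[[0, 2]]                      # a candidate two-sentence summary
print(coverage(summary, centroid), diversity(summary))
```

A good summary scores high on both; note that the two objectives conflict, since adding near-duplicate high-coverage sentences lowers diversity.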
Publication Date
Mon Sep 25 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
On Double Stage Shrinkage Estimator For the Variance of Normal Distribution With Unknown Mean

     This paper is concerned with preliminary-test double-stage shrinkage estimators for estimating the variance (σ²) of a normal distribution when a prior estimate of the actual value of σ² is available and the mean is unknown, using specified shrinkage weight factors ψ(·) in addition to a pre-test region (R).

      Expressions for the bias, mean squared error [MSE(·)], relative efficiency [R.EFF(·)], expected sample size [E(n/σ²)], and percentage of the overall sample saved by the proposed estimator were derived. Numerical results (using the MathCAD program) and conclusions are drawn about the selection of the different constants included in the…

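As a hedged sketch under standard assumptions (the paper's exact estimator is not shown in this excerpt), a preliminary-test double-stage shrinkage estimator typically shrinks the first-stage sample variance s₁² toward the prior value σ₀² when s₁² falls in the pre-test region R, and otherwise draws a second-stage sample and pools:

```latex
\hat{\sigma}^2_{DS} =
\begin{cases}
\sigma_0^2 + \psi(s_1^2)\,\bigl(s_1^2 - \sigma_0^2\bigr), & s_1^2 \in R \quad \text{(stop after the first stage)},\\[6pt]
\dfrac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}, & s_1^2 \notin R \quad \text{(take a second-stage sample)},
\end{cases}
```

where n₁ and n₂ are the first- and second-stage sample sizes and 0 ≤ ψ(·) ≤ 1 is the shrinkage weight factor; the expected sample size E(n/σ²) then lies between n₁ and n₁ + n₂.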
Publication Date
Sat Jan 01 2022
Journal Name
Turkish Journal Of Physiotherapy And Rehabilitation
Classification of the COCO Dataset Using Machine Learning Algorithms

In this paper, four classification methods were used to classify objects and compared among each other: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). The COCO dataset was used for classifying and detecting the objects; its images were randomly divided into training and testing datasets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, enhanced using the histogram equalization method, and resized to 20 × 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification metho…

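The preprocessing-and-classification pipeline described above can be sketched with scikit-learn. This is a minimal stand-in, not the paper's code: it uses scikit-learn's bundled 8 × 8 digits images instead of COCO (so the 20 × 20 resize step is skipped), and default hyperparameters throughout:

```python
# Sketch of the pipeline: gray images -> histogram equalization -> PCA
# feature extraction -> the four classifiers, with a 7:3 random split.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neural_network import MLPClassifier

def hist_equalize(img, levels):
    """Spread the gray-level histogram over the full range via the CDF."""
    hist, _ = np.histogram(img.flatten(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (levels - 1) * cdf / cdf[-1]          # normalize CDF to [0, levels-1]
    return cdf[img.astype(int)]                 # map each pixel through the CDF

X, y = load_digits(return_X_y=True)             # 8x8 gray images, values 0..16
X = hist_equalize(X, levels=17)                 # contrast enhancement
# 7:3 random split into training and testing sets, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=30).fit(X_tr)            # feature extraction
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

classifiers = {
    "KNN": KNeighborsClassifier(),
    "SGD": SGDClassifier(random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
results = {}
for name, clf in classifiers.items():
    results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(name, round(results[name], 3))
```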
Publication Date
Thu Apr 28 2022
Journal Name
Iraqi Journal Of Science
A Load Balancing Scheme for a Server Cluster Using History Results

Load balancing in computer networks is one of the subjects that has received the most attention from researchers in the last decade. Load balancing reduces processing time and memory usage, the two main concerns of network companies nowadays, and the two factors that determine whether an approach is worth applying. There are two kinds of load balancing: static load balancing, in which jobs are distributed among servers before processing starts and stay at their servers until the end of processing, and dynamic load balancing, in which jobs are moved during processing. In this research, two algorithms are designed and implemented: the History Usage (HU) algorithm, which statically balances the load of a Loaded…

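As an illustration of static load balancing only (the paper's History Usage algorithm is not reproduced here), a greedy scheme that assigns each job to the server with the smallest accumulated historical load might look like:

```python
# Illustrative sketch: jobs are placed before processing starts and never
# move; the "history" is each server's accumulated load from past assignments.
import heapq

def static_assign(job_costs, n_servers):
    """Assign each job to the currently least-loaded server, greedily."""
    heap = [(0, s) for s in range(n_servers)]   # (accumulated load, server id)
    heapq.heapify(heap)
    assignment = []
    for cost in job_costs:
        load, server = heapq.heappop(heap)      # least-loaded server so far
        assignment.append(server)
        heapq.heappush(heap, (load + cost, server))
    return assignment

jobs = [5, 3, 8, 2, 7, 4]
print(static_assign(jobs, 3))  # [0, 1, 2, 1, 0, 1]
```

Because the assignment is fixed up front, an unexpectedly long job cannot be migrated; that limitation is exactly what dynamic load balancing addresses.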
Publication Date
Wed Apr 25 2018
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Different Estimation Methods for System Reliability Multi-Components model: Exponentiated Weibull Distribution

        In this paper, estimation of the system reliability of the multi-component stress-strength model R(s,k) is considered, when the stress and strength are independent random variables that follow the Exponentiated Weibull Distribution (EWD) with a known first shape parameter θ, while the second shape parameter α is unknown, using different estimation methods. Comparisons among the proposed estimators were made through the Monte Carlo simulation technique, depending on the mean squared error (MSE) criterion.
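A hedged Monte Carlo sketch of what R(s,k) measures: the probability that at least s of k component strengths exceed a common stress. The Exponentiated Weibull CDF is assumed here to be F(x) = (1 − e^(−x^θ))^α, one common parameterization; the paper's exact form and its estimators may differ:

```python
import numpy as np

def rvs_exp_weibull(alpha, theta, size, rng):
    """Inverse-CDF sampling from F(x) = (1 - exp(-x**theta))**alpha."""
    u = rng.uniform(size=size)
    return (-np.log(1 - u ** (1 / alpha))) ** (1 / theta)

def reliability_sk(s, k, alpha_strength, alpha_stress, theta, n_sim, seed=0):
    """Monte Carlo estimate of R(s,k) = P(at least s of k strengths > stress)."""
    rng = np.random.default_rng(seed)
    strengths = rvs_exp_weibull(alpha_strength, theta, (n_sim, k), rng)
    stress = rvs_exp_weibull(alpha_stress, theta, (n_sim, 1), rng)
    # System works when at least s of the k components exceed the common stress.
    return np.mean((strengths > stress).sum(axis=1) >= s)

print(reliability_sk(s=1, k=3, alpha_strength=2.0, alpha_stress=1.0,
                     theta=1.5, n_sim=100_000))
```

Increasing s (requiring more components to survive) can only decrease R(s,k), which provides a quick sanity check on the simulation.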

Publication Date
Sat Dec 30 2023
Journal Name
Journal Of Economics And Administrative Sciences
The Cluster Analysis by Using Nonparametric Cubic B-Spline Modeling for Longitudinal Data

Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed to analyze this type of data.

In this research, the focus was on compiling and analyzing these data, as cluster analysis plays an important role in identifying and grouping co-expressed profiles over time and employing them in the nonparametric smoothing cubic B-spline model. This model provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also more flexible and can capture more complex patterns and fluctuations in the data.

The longitudinal balanced data profile was compiled into subgroup…

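A minimal sketch of cubic smoothing-spline fitting for a single simulated profile, using SciPy's UnivariateSpline with k = 3 (which yields the continuous first and second derivatives noted above); the clustering step and the paper's actual data are not reproduced:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                      # observation times
y = np.sin(t) + rng.normal(0, 0.2, t.size)      # one noisy longitudinal profile

# s controls smoothness; a common choice is roughly n * noise variance.
spline = UnivariateSpline(t, y, k=3, s=len(t) * 0.04)
smooth = spline(t)                              # smoothed profile values
# A cubic spline has well-defined derivatives up to order 3 at any point.
print(float(spline.derivatives(5.0)[2]))        # second derivative at t = 5
```

Profiles smoothed this way can then be clustered on their fitted curves (or spline coefficients) rather than on the raw noisy observations.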
Publication Date
Thu Mar 30 2023
Journal Name
Journal Of Economics And Administrative Sciences
Comparison of Some Methods for Estimating Nonparametric Binary Logistic Regression

In this research, the kernel estimator (a nonparametric density estimator) was relied upon in estimating the binary-response logistic regression, comparing the Nadaraya-Watson method with the Local Scoring algorithm. The optimal smoothing parameter λ was estimated by the cross-validation and generalized cross-validation methods; the optimal bandwidth λ has a clear effect on the estimation process and a key role in smoothing the curve as it approaches the real curve. The goal of using the kernel estimator is to modify the observations so that estimators can be obtained with characteristics close to the properties of the real parameters. Based on medical data for patients with chro…

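A hedged sketch of the Nadaraya-Watson estimator applied to a binary response: the smoothed probability at a point is a kernel-weighted average of the 0/1 outcomes. The Gaussian kernel and the fixed bandwidth below are assumptions of this example; in the paper the bandwidth is chosen by cross-validation:

```python
import numpy as np

def nadaraya_watson(x0, x, y, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regression estimate at points x0."""
    u = (np.asarray(x0)[:, None] - np.asarray(x)[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
    # Weighted average of the responses; for 0/1 data this is P(y=1 | x0).
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)
p = 1 / (1 + np.exp(-2 * x))                    # true logistic probability
y = rng.binomial(1, p)                          # simulated binary response
p_hat = nadaraya_watson(np.array([-2.0, 0.0, 2.0]), x, y, bandwidth=0.5)
print(p_hat)  # should track the logistic curve: low, near 0.5, high
```

A too-large bandwidth over-smooths the estimated curve toward the overall mean, while a too-small one chases the 0/1 noise; this is why the choice of λ dominates the estimation quality.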
Publication Date
Thu Aug 01 2019
Journal Name
Journal Of Economics And Administrative Sciences
Comparison of Bayes Estimators of the Reliability Function of the Pareto Type I Distribution Using Different Double Informative Priors

This paper compares double informative priors assumed for the reliability function of the Pareto type I distribution. To estimate the reliability function of the Pareto type I distribution by Bayes estimation, two different kinds of information are used; two different priors are selected for the parameter of the Pareto type I distribution, assuming three double priors: the chi-squared-gamma distribution, the gamma-Erlang distribution, and the Erlang-exponential distribution. The estimators are derived under the squared error loss function with the two different double priors. Using the simulation technique, the performance is compared for…

Publication Date
Sat Feb 22 2025
Journal Name
Journal Of Studies And Researches Of Sport Education
The effect of using the educational bag on the level of learning some offensive skills with the epee weapon
