The study focuses on assessing the quality of several image enhancement methods applied to renal X-ray images. The enhancement methods were Imadjust, Histogram Equalization (HE), and Contrast Limited Adaptive Histogram Equalization (CLAHE). Image quality scores were calculated to compare the input images with the output images of these three enhancement techniques. Eight renal X-ray images were collected for the experiments. X-ray images generally lack contrast because of the low radiation dose, a deficiency that an enhancement process can remedy. Three no-reference image quality metrics were used to assess the resulting images: the Naturalness Image Quality Evaluator (NIQE), the Perception-based Image Quality Evaluator (PIQE), and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). These methods improved the quality of the images in support of diagnosis. For the collected images, all of the chosen enhancement methods produced higher-quality images than the originals. According to the quality metrics and the assessment of radiology experts, CLAHE was the best enhancement method.
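As a minimal sketch of this pipeline (assuming Python with scikit-image as a stand-in for the MATLAB-style functions the method names suggest; `renal.png` is a hypothetical input file):

```python
# Sketch of the three enhancement steps on one renal X-ray image.
# rescale_intensity plays the role of Imadjust, equalize_hist is HE,
# and equalize_adapthist is CLAHE.
import numpy as np
from skimage import io, exposure
from skimage.util import img_as_float

xray = img_as_float(io.imread("renal.png", as_gray=True))  # hypothetical file

# Imadjust-style contrast stretching: map the 1st..99th percentiles to [0, 1].
p1, p99 = np.percentile(xray, (1, 99))
adjusted = exposure.rescale_intensity(xray, in_range=(p1, p99))

# Global histogram equalization (HE).
he = exposure.equalize_hist(xray)

# Contrast Limited Adaptive Histogram Equalization (CLAHE).
clahe = exposure.equalize_adapthist(xray, clip_limit=0.02)

# NIQE, PIQE and BRISQUE are no-reference scores; MATLAB ships them as
# niqe/piqe/brisque, and Python ports exist in third-party packages.
```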
The research aims to review the concepts of banking efficiency and their relationship to performance and productivity, and to analyze banking efficiency from a micro-economic perspective.
To achieve these objectives, we employed graphical, econometric, and mathematical methods to derive the different concepts of banking efficiency.
We show that there are two main methods used to measure bank efficiency: the first, Stochastic Frontier Analysis (SFA), relies on parametric methods; the other, Data Envelopment Analysis (DEA), is based on mathematical programming methods.
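To make the mathematical-programming side concrete, here is a hedged sketch of the standard input-oriented CCR DEA model solved as a linear program with SciPy; the bank input/output figures are invented for illustration and are not from the paper:

```python
# Input-oriented CCR DEA efficiency scores via linear programming.
# For each bank k: minimize theta subject to
#   sum_j lam_j * x_ij <= theta * x_ik  (inputs),
#   sum_j lam_j * y_rj >= y_rk          (outputs),  lam_j >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 30., 40., 25.],        # input 1, e.g. staff (cols: banks)
              [300., 200., 500., 350.]])   # input 2, e.g. operating cost
Y = np.array([[100., 80., 150., 120.]])    # output, e.g. loans issued

n_inputs, n_banks = X.shape
for k in range(n_banks):
    c = np.r_[1.0, np.zeros(n_banks)]              # minimize theta
    A_in = np.hstack([-X[:, [k]], X])              # sum lam*x - theta*x_k <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -sum lam*y <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_inputs), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n_banks))
    print(f"bank {k}: efficiency = {res.x[0]:.3f}")
```

A bank is CCR-efficient when its score equals 1; scores below 1 indicate how far its inputs could be scaled down while still producing its outputs.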
The Dagum Regression Model, introduced to address limitations in traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, which is common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the Log-Normal and Gamma distributions. The model's parameters are estimated using Maximum Likelihood Estimation (MLE) and the Method of Moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples. These findings provide valuable insights into the analysis of heavy-tailed, asymmetric data such as income and wealth distributions.
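As a hedged illustration of the MLE step (the Dagum distribution coincides with Burr Type III, available in SciPy as `burr`; the data below is simulated for illustration, not taken from the paper):

```python
# Fit a Dagum (Burr Type III) model by maximum likelihood and compare it
# with Log-Normal and Gamma fits via AIC, on simulated heavy-tailed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.burr.rvs(c=3.0, d=1.5, scale=2.0, size=500, random_state=rng)

fits = {
    "Dagum (Burr III)": (stats.burr, stats.burr.fit(data, floc=0)),
    "Log-Normal": (stats.lognorm, stats.lognorm.fit(data, floc=0)),
    "Gamma": (stats.gamma, stats.gamma.fit(data, floc=0)),
}
for name, (dist, params) in fits.items():
    loglik = np.sum(dist.logpdf(data, *params))
    # len(params) counts the fixed loc too; all three models fix loc=0,
    # so the constant offset does not affect the AIC ranking.
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:18s} AIC = {aic:.1f}")   # lower AIC = better trade-off
```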
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature-extraction techniques and studies the accuracy of well-known classifiers for Arabic numerals using statistical methods, comparing two approaches. The first method, a linear discriminant function, yields accuracy as high as 90% of the original grouped cases correctly classified. In the second method, we propose a new algorithm; the results show the efficiency of the proposed algorithms, which achieve recognition accuracies of 92.9% and 91.4%, making the second method more efficient than the first.
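A minimal sketch of the first method, linear discriminant classification of digit images (using scikit-learn and its built-in handwritten-digits dataset as a stand-in for the Arabic-numeral data, which the abstract does not publish):

```python
# Linear discriminant classification of digit images.
# sklearn's digits dataset stands in for the Arabic-numeral data.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)          # 8x8 grey images, flattened
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```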
Arabic calligraphy is one of the ancient arts rooted in history, and conflicting views and writings have addressed it as a communication tool for the language. Teaching calligraphy is both an art and a science, because its art rests on fixed fundamentals and precise rules centred on beauty. Teaching Arabic calligraphy also targets speed, since education and recitation help one to write quickly, which is of great interest in the field of education and in life alike. Arabic calligraphy has also accompanied the scientific renaissance and significant knowledge in the Ara…
Medical image segmentation has been one of the most actively studied fields of the past few decades. With the development of modern imaging modalities such as magnetic resonance imaging (MRI) and computed tomography (CT), physicians and technicians nowadays must process an increasing number and size of medical images. Efficient and accurate computational segmentation algorithms have therefore become necessary to extract the desired information from these large data sets. Moreover, sophisticated segmentation algorithms can help physicians better delineate the anatomical structures present in the input images, enhance the accuracy of medical diagnosis, and facilitate the best treatment planning. Many of the proposed algorithms could perform well…
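The abstract is cut off before naming its own algorithm; as a generic classical baseline of the kind such work builds on, here is a minimal Otsu-threshold segmentation with scikit-image (`scan.png` is a hypothetical input slice):

```python
# Minimal classical segmentation baseline: Otsu thresholding plus cleanup.
from skimage import io, filters, morphology
from skimage.measure import label, regionprops

scan = io.imread("scan.png", as_gray=True)     # hypothetical CT/MRI slice
mask = scan > filters.threshold_otsu(scan)     # global intensity threshold
mask = morphology.remove_small_objects(mask, min_size=64)  # drop speckle

# Report the segmented regions' sizes as a sanity check.
for region in regionprops(label(mask)):
    print(f"region {region.label}: area = {region.area} px")
```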
We explore the transform coefficients of fractal coding and exploit a new method to improve the compression capabilities of these schemes. In most standard encoder/decoder systems, quantization and de-quantization are managed as a separate step; here we introduce a new method in which they are handled simultaneously. Additional compression is achieved by this method with high image quality, as shown later.
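The paper's exact scheme is not spelled out in the abstract; as a hedged sketch of the general idea (using the de-quantized coefficient inside the encoder's block matching, so quantization is not a separate after-step), with an assumed 5-bit uniform quantizer for the fractal contrast coefficient:

```python
# Sketch: fold quantization/de-quantization into fractal coefficient
# computation, so the encoder scores exactly what the decoder will see.
import numpy as np

BITS = 5                        # assumed coefficient precision
LEVELS = 2 ** BITS - 1
S_MAX = 1.0                     # contrast coefficient limited to [-1, 1]

def quantize_contrast(s):
    """Quantize s to BITS bits and immediately return the decoder's value."""
    s = np.clip(s, -S_MAX, S_MAX)
    index = int(round((s + S_MAX) / (2 * S_MAX) * LEVELS))   # stored integer
    return index, index / LEVELS * 2 * S_MAX - S_MAX         # de-quantized s

def match_error(domain, range_block):
    """Least-squares contrast/offset fit of a domain block to a range block,
    scored with the quantized contrast the decoder will actually use."""
    d, r = domain.ravel(), range_block.ravel()
    dc = d - d.mean()
    s = np.dot(dc, r - r.mean()) / max(np.dot(dc, dc), 1e-12)
    _, s_hat = quantize_contrast(s)        # de-quantized value used here
    o = r.mean() - s_hat * d.mean()        # offset computed against s_hat
    return np.sum((s_hat * d + o - r) ** 2)
```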
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature-extraction techniques has been achieved. The four techniques are the coloured-histogram features technique, the properties features technique, the grey-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to compute a similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used, normalized Mahalanobis distance, Euclidean distance, and Manhattan distance, and a comparison between them has been implemented. From the results, it is concluded…
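A hedged sketch of the GLCM feature step and the three distance measures (scikit-image and SciPy; `query.png` and `db0.png`… are hypothetical file names):

```python
# GLCM statistical features plus the three CBIR similarity measures.
import numpy as np
from skimage import io
from skimage.util import img_as_ubyte
from skimage.feature import graycomatrix, graycoprops
from scipy.spatial import distance

def glcm_features(path):
    """Contrast/correlation/energy/homogeneity from a 1-pixel horizontal GLCM."""
    img = img_as_ubyte(io.imread(path, as_gray=True))
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

q = glcm_features("query.png")                                   # query image
db = np.array([glcm_features(f"db{i}.png") for i in range(10)])  # database

VI = np.linalg.inv(np.cov(db, rowvar=False))   # feature covariance, for Mahalanobis
for i, f in enumerate(db):
    print(i,
          distance.euclidean(q, f),            # Euclidean
          distance.cityblock(q, f),            # Manhattan
          distance.mahalanobis(q, f, VI))      # (normalized) Mahalanobis
```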
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics, but these require complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff…
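As a toy one-dimensional analogue of this approach (iteratively refining each example's warp toward the evolving mean with Levenberg-Marquardt; here the warp is just a single shift parameter per signal, via SciPy):

```python
# Toy 1-D groupwise alignment: refine one shift parameter per signal by
# Levenberg-Marquardt minimization of the residual to the current mean.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 1, 200)
true_shifts = np.array([-0.05, 0.0, 0.04, 0.08])
signals = np.array([np.exp(-((x - 0.5 - s) ** 2) / 0.01) for s in true_shifts])

def warp(signal, shift):
    return np.interp(x, x + shift, signal)   # shift the signal along x

shifts = np.zeros(len(signals))
for _ in range(5):                 # alternate: recompute mean, refine warps
    mean = np.mean([warp(s, t) for s, t in zip(signals, shifts)], axis=0)
    for i, s in enumerate(signals):
        res = least_squares(lambda t: warp(s, t[0]) - mean,
                            x0=[shifts[i]], method="lm")
        shifts[i] = res.x[0]
    shifts -= shifts.mean()        # fix the global-shift gauge freedom

print("recovered relative shifts:", np.round(shifts, 3))
```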