Many image-processing systems in daily use and development rely on a set of basic operations: detecting regions of interest, describing their properties, and matching those regions. These operations play a significant role in the decision making required by subsequent stages of the assigned task. Various algorithms have been introduced over the years to accomplish these tasks; one of the most popular is the Scale Invariant Feature Transform (SIFT). SIFT owes its strong detection and description performance to the large number of keypoints it operates on, but this is also its main drawback: it is rather time consuming. In the suggested approach, a system that deploys SIFT for its basic tasks of matching and description focuses on minimizing the number of keypoints by applying the Fast Approximate Nearest Neighbor algorithm, which reduces redundant matches and thereby speeds up the process. The proposed application has been evaluated on two criteria, time and accuracy; it achieved accuracy of up to 100% while speeding up the processes of matching and description.
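As a minimal sketch of the matching step described above, the following pure-Python code pairs descriptors by nearest-neighbor distance with Lowe's ratio test (the standard filter used with SIFT matching); the descriptors here are plain numeric lists standing in for real SIFT vectors, and a production system would use an approximate index (e.g. FLANN) rather than this brute-force search:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(query_descs, train_descs, ratio=0.75):
    """For each query descriptor, find its two nearest train descriptors
    and keep the match only if the best is clearly closer than the
    second best (Lowe's ratio test). Returns (query_idx, train_idx) pairs."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((euclidean(q, t), ti) for ti, t in enumerate(train_descs))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Discarding ambiguous matches this way is what removes the redundant matching work the abstract refers to: only keypoints with a clearly distinct nearest neighbor survive.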
Heart disease is a significant and impactful health condition that ranks as the leading cause of death in many countries. Clinical datasets are available for reference to aid physicians in diagnosing cardiovascular diseases. However, with the rise of big data and large medical datasets, it has become increasingly challenging for medical practitioners to predict heart disease accurately, because an abundance of irrelevant and redundant features increases computational complexity and degrades accuracy. This study therefore aims to identify the most discriminative features within high-dimensional datasets, minimizing complexity and improving accuracy, through an Extra Tree based feature selection technique. The study assesses the efficac…
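A rough sketch of the idea behind Extra-Trees-style feature scoring follows. This is not the paper's method: it is a simplified single-feature proxy that averages the Gini impurity decrease over random (not optimized) split thresholds, which is the "extremely randomized" ingredient of Extra Trees; all names and the toy data are illustrative:

```python
import random

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def random_split_importance(X, y, feature, trials=50, seed=0):
    """Average impurity decrease over random split thresholds on one feature."""
    rng = random.Random(seed)
    values = [row[feature] for row in X]
    lo, hi = min(values), max(values)
    parent = gini(y)
    gains = []
    for _ in range(trials):
        t = rng.uniform(lo, hi)  # Extra-Trees-style random threshold
        left = [yi for v, yi in zip(values, y) if v <= t]
        right = [yi for v, yi in zip(values, y) if v > t]
        child = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        gains.append(parent - child)
    return sum(gains) / trials

def rank_features(X, y, n_features):
    """Rank feature indices from most to least discriminative."""
    scores = {f: random_split_importance(X, y, f) for f in range(n_features)}
    return sorted(scores, key=scores.get, reverse=True)
```

Keeping only the top-ranked features is what reduces the dimensionality and the computational cost the abstract describes.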
In this study, Fourier Transform Infrared Spectrophotometry (FTIR), X-Ray Diffraction (XRD) and loss on ignition (LOI) were comparatively employed to provide a quick, relatively inexpensive and efficient method for identifying and quantifying the calcite content of phosphate ore samples taken from the Akashat site in Iraq. A comprehensive spectroscopic study of the phosphate-calcite system was first reported in the Mid-IR spectra (400-4000 cm-1) using a Shimadzu IRAffinity-1, for different cuts of phosphate field grades, with samples beneficiated using calcination and leaching with organic acid at different temperatures. The resulting spectra were then used to create a calibration curve relating material concentrations to the intensity (peaks) of FTIR absorbance, and this calibration was applied to specify the phosphate-calcite content in Iraqi calcareous phosphate ore. Their peaks were ass…
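The calibration-curve step above is, in essence, a linear least-squares fit of peak absorbance against known concentration (the Beer-Lambert relation), inverted to read off unknown samples. A minimal sketch, with illustrative function names and toy numbers rather than the study's actual data:

```python
def fit_calibration(concentrations, absorbances):
    """Ordinary least-squares line A = m*c + b relating peak absorbance
    to known calcite concentration (Beer-Lambert style calibration)."""
    n = len(concentrations)
    cx = sum(concentrations) / n
    cy = sum(absorbances) / n
    m = sum((c - cx) * (a - cy) for c, a in zip(concentrations, absorbances)) \
        / sum((c - cx) ** 2 for c in concentrations)
    b = cy - m * cx
    return m, b

def predict_concentration(m, b, absorbance):
    """Invert the calibration line to estimate an unknown sample's content."""
    return (absorbance - b) / m
```

Once fitted on standards of known composition, the same line converts any measured peak intensity into an estimated calcite content.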
This paper is concerned with combining two different transforms to present a new joint transform, FHET, and its inverse transform, IFHET. The most important property of FHET, the finite Hankel-Elzaki transform of the Bessel differential operator, was also stated and proved; this property was discussed for two different boundary conditions, Dirichlet and Robin. Its importance is shown by solving axisymmetric partial differential equations, passing directly to an algebraic equation. The joint finite Hankel-Elzaki transform method was also applied to a mathematical-physics problem, the Hotdog Problem. A steady state which does not depend on time was discussed f…
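The abstract names FHET without reproducing its definition. As background only, and assuming the usual conventions (the paper's exact normalization may differ), the standard building blocks are the Elzaki transform and the finite Hankel transform of order zero, together with the Bessel-operator property for the Dirichlet case:

```latex
% Elzaki transform (standard definition)
T(v) = E[f(t)](v) = v \int_0^\infty f(t)\, e^{-t/v}\, dt , \qquad v > 0 .

% Finite Hankel transform of order zero on 0 \le r \le a
\tilde{f}(\xi_i) = H_0[f(r)](\xi_i) = \int_0^a r\, f(r)\, J_0(\xi_i r)\, dr .

% Bessel-operator property under the Dirichlet condition J_0(\xi_i a) = 0
H_0\!\left[ f''(r) + \tfrac{1}{r}\, f'(r) \right](\xi_i)
  = -\xi_i^2\, \tilde{f}(\xi_i) + a\, \xi_i\, J_1(\xi_i a)\, f(a) .
```

The last identity is what replaces the radial derivatives of an axisymmetric PDE by the algebraic factor $-\xi_i^2$, which is the transition to an algebraic equation that the abstract describes.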
Image compression is a type of data compression applied to digital images in order to reduce their high storage and/or transmission cost. Image compression algorithms may exploit visual sensitivity and the statistical properties of image data to deliver results superior to generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless, statistics-based algorithm known as arithmetic coding. The more frequent pixel values are coded in fewer bits compared with pixel values of less occurre…
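A full arithmetic coder is too long to sketch here, but the core property the abstract relies on, that frequent pixel values cost fewer bits, can be illustrated by computing each symbol's ideal code length, -log2(p), from a block's frequency model; an arithmetic coder approaches this entropy bound. The block below is a hedged illustration, not the paper's implementation:

```python
import math
from collections import Counter

def ideal_code_lengths(pixels):
    """Per-symbol ideal code length in bits, -log2(p).
    Frequent values get short codes, rare values get long ones."""
    freq = Counter(pixels)
    total = len(pixels)
    return {v: -math.log2(c / total) for v, c in freq.items()}

def estimated_size_bits(pixels):
    """Entropy-model estimate of the encoded size of one block,
    i.e. the size an ideal arithmetic coder would approach."""
    lengths = ideal_code_lengths(pixels)
    return sum(lengths[v] for v in pixels)
```

For a block dominated by one pixel value, the estimate falls well below the 8 bits per pixel of the raw representation, which is exactly where the compression gain comes from.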
In this study, different methods were used for estimating the location and scale parameters of the extreme value distribution, such as maximum likelihood estimation (MLE), the method of moments (ME), and approximate estimators based on percentiles, known as the White method; the extreme value distribution is one of the exponential-type distributions. Ordinary least squares (OLS), weighted least squares (WLS), ridge regression (Rig), and adjusted ridge regression (ARig) estimation were also used. Two parameters for the expected value of the percentile as an estimate of the distribution f…
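Of the estimators listed, the method of moments is the simplest to sketch. Assuming the Gumbel (type I extreme value) form, the moment estimators are scale = s*sqrt(6)/pi and location = mean - gamma*scale, with gamma the Euler-Mascheroni constant; this is a textbook illustration, not the study's code:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moment_estimates(sample):
    """Method-of-moments estimators for the Gumbel (extreme value)
    distribution: scale from the sample variance, location from the mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale
```

On a sample drawn from a standard Gumbel distribution (location 0, scale 1), the pair returned should be close to (0, 1), which is the usual sanity check for these estimators.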
Today in the digital realm, images constitute a massive share of social media content but unfortunately suffer from two issues, size and transmission, for which compression is the ideal solution. Pixel-based techniques are among the modern, spatially optimized modeling techniques, with deterministic and probabilistic bases that imply a mean, an index, and a residual. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, along with lossless utilization of the deterministic part. The tested results achieved higher size-reduction performance than traditional pixel-based techniques and standard JPEG by about 40% and 50%, …
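The mean/index/residual decomposition mentioned above can be sketched in its simplest deterministic form: subtract a rounded block mean so the residuals cluster near zero and code cheaply. This is a generic illustration of the decomposition, not the paper's MMSA/C321 scheme:

```python
def mean_residual_decompose(block):
    """Split a block of pixel values into a rounded block mean and
    per-pixel residuals; small residuals near zero code cheaply."""
    mean = round(sum(block) / len(block))
    residuals = [p - mean for p in block]
    return mean, residuals

def mean_residual_reconstruct(mean, residuals):
    """Invert the decomposition exactly (the deterministic, lossless part)."""
    return [mean + r for r in residuals]
```

Because the reconstruction is exact, this deterministic part can be handled losslessly, while the probabilistic modeling is applied only to the residual stream.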
Four simply supported reinforced concrete (RC) beams were tested experimentally and analyzed using the extended finite element method (XFEM). This method is used to treat the discontinuities resulting from the fracture process and crack propagation that occur in concrete. The Meso-Scale Approach (MSA) was used to model concrete as a heterogeneous, three-phase material (coarse aggregate, mortar, and air voids in the cement paste). The coarse aggregate used in casting these beams was of rounded and crushed shape with a maximum size of 20 mm. The compressive strengths used in these beams are 17 MPa and 34 MPa, respectively. These RC beams are designed to fail in flexure when subjected to lo…
Statistical methods and statistical decision making were used to arrange and analyze the primary data to obtain norms, which are used with Geographic Information Systems (GIS) and spatial analysis programs to identify the animal production and poultry units in strategic nutrition channels, as well as the priorities of food insecurity through local production and through imports when there is no capacity for production. Poultry production is one of the most important commodities satisfying the human body's protein requirements, and one of the most important criteria for measuring the development and prosperity of nations. The poultry fields of Babylon Governorate are located in the Abi Ghareg and Al_Kifil centers according to many criteria or factors such as the popu…