Pathology reports are essential for specialists to make an appropriate diagnosis of diseases in general and blood diseases in particular; specialists check blood cells and other blood details, so diagnosing a disease requires analyzing the factors of the patient's blood together with the medical history. Doctors have increasingly tended to use intelligent agents to help them with complete blood count (CBC) analysis. However, these agents need analytical tools to extract the CBC parameters employed in predicting the development of life-threatening bacteremia and to offer prognostic data. This paper therefore proposes an enhancement to the Rabin–Karp algorithm and combines it with the fuzzy ratio to make the algorithm suitable for working with CBC test data. These algorithms were selected after evaluating the utility of various string matching algorithms, with the aim of establishing an accurate text collection tool as a baseline for building a general report on patient information. The proposed method includes several basic steps. First, the CBC-driven parameters are extracted using an efficient method for retrieving information from PDF files or images of the CBC tests: 12 traditional string matching algorithms are implemented, the most effective ones are identified from the implementation results, and a hybrid approach is then introduced to address the shortcomings of those methods and yield a faster, more effective algorithm for analyzing pathological tests. The proposed algorithm (Razy) was implemented using the Rabin–Karp algorithm and the fuzzy ratio method. The results show that the proposed algorithm is fast and efficient, with an average accuracy of 99.94% when retrieving the results. We conclude that the string matching algorithm is a crucial tool in the report analysis process and directly affects the efficiency of the analytical system.
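As a rough illustration of the idea, the sketch below pairs a Rabin–Karp rolling-hash scan with a fuzzy-ratio fallback, using Python's standard difflib.SequenceMatcher in place of a dedicated fuzzy-ratio library; the hash base, modulus, and threshold are illustrative choices, and this is not the authors' exact Razy implementation.

```python
from difflib import SequenceMatcher

def razy_style_search(text, pattern, base=256, mod=101, min_ratio=0.85):
    """Rabin-Karp rolling-hash scan with a fuzzy-ratio fallback.

    Exact hash matches are verified directly; every window is also
    scored with SequenceMatcher.ratio() so near-matches (e.g. OCR
    noise in a CBC report) above min_ratio are still reported.
    """
    n, m = len(text), len(pattern)
    if m == 0 or n < m:
        return []
    high = pow(base, m - 1, mod)          # base^(m-1) mod q, for rolling
    p_hash = w_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        w_hash = (w_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        window = text[i:i + m]
        if w_hash == p_hash and window == pattern:
            hits.append((i, 1.0))                      # exact match
        else:
            ratio = SequenceMatcher(None, window, pattern).ratio()
            if ratio >= min_ratio:
                hits.append((i, ratio))                # fuzzy match
        if i < n - m:                                  # roll the hash
            w_hash = ((w_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return hits

# Example: locating a CBC parameter label despite a one-character OCR error.
print(razy_style_search("WBC: 7.2  HGB: 13.5  PLT: 250", "H6B", min_ratio=0.6))
```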
Despite scholars' attention to the typology of modality as a linguistic phenomenon, the use of modality across varieties of English is not well represented in communication-based research that takes semantics, pragmatics, and discourse as its objects of investigation. The paper draws its data from six M.A. dissertations from a Nigerian university and an equal number of M.A. dissertations from an Iraqi university to investigate, qualitatively and quantitatively, the contextual use of modality from a pragmatic perspective. The data analysis reveals that modal meanings such as usuality, potentiality, necessity, probability, and obligation in the dissertations encapsulate interpersonal and authorial voice, in which the mean …
The article provides a comparative analysis of comparisons in Russian and Arabic, aimed at identifying their structural, typological, and functional-pragmatic features. The study is based on a systematic approach to the analysis of linguistic means of expressing comparisons in two different linguistic cultures. The article analyzes the main structural components of comparisons, their classification, and their cognitive and aesthetic functions. The results of the study demonstrate the deep cultural conditioning of comparative constructions and their important role in representing the specific features of the respective linguistic cultures.
Today, large amounts of geospatial data are available on the web, such as Google Map (GM), OpenStreetMap (OSM), the Flickr service, Wikimapia, and others. These services are collectively referred to as open-source geospatial data. Geospatial data from different sources often have variable accuracy due to different data collection methods; therefore, data accuracy may not meet user requirements across organizations. This paper aims to develop a tool to assess the quality of GM data by comparing it with formal data, such as spatial data from the Mayoralty of Baghdad (MB). The tool was developed in the Visual Basic language and validated on two different study areas in Baghdad, Iraq (Al-Karada and Al-Kadhumiyah). The positional accuracy was assessed …
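The positional-accuracy comparison the abstract describes is conventionally computed as the root-mean-square error (RMSE) between matched point pairs in the two datasets. Below is a minimal Python sketch of that computation (not the authors' Visual Basic tool); the coordinates are invented for illustration.

```python
import math

def positional_rmse(test_pts, ref_pts):
    """Root-mean-square positional error between matched point pairs.

    Each pair holds projected (easting, northing) coordinates in metres;
    test_pts come from the open source (e.g. Google Map digitising) and
    ref_pts from the formal dataset (e.g. Mayoralty of Baghdad).
    """
    sq_errors = [(tx - rx) ** 2 + (ty - ry) ** 2
                 for (tx, ty), (rx, ry) in zip(test_pts, ref_pts)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Illustrative coordinates only (not real survey data).
gm = [(445120.3, 3685230.1), (445500.9, 3685010.4)]
mb = [(445122.1, 3685228.7), (445498.5, 3685012.0)]
print(f"RMSE = {positional_rmse(gm, mb):.2f} m")
```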
Background subtraction is a leading technique adopted for detecting moving objects in video surveillance systems. Various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a model of pixel-based color histograms and Fuzzy C-means (FCM) to obtain the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and ultimately classify each pixel as background or foreground according to a tuned threshold. The performance of this model is benchmarked on the CDnet2014 dynamic scenes dataset using statistical metrics. The results show better performance against the state-of-the-art …
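A minimal sketch of the cosine-similarity test described above: the current pixel's colour histogram is compared with the FCM-derived background histogram, and the pixel is labelled background when the similarity exceeds a tuned threshold. The 8-bin histograms and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def is_background(pixel_hist, model_hist, threshold=0.9):
    """Classify a pixel by cosine similarity between its current colour
    histogram and the background-model histogram.

    Returns True (background) when the two histograms point in nearly
    the same direction; the threshold would be tuned per scene.
    """
    a = np.asarray(pixel_hist, dtype=float)
    b = np.asarray(model_hist, dtype=float)
    cs = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cs >= threshold

# Toy 8-bin colour histograms for one pixel location.
model = [12, 30, 55, 40, 10, 3, 0, 0]     # learned background model
frame = [11, 32, 52, 41, 12, 2, 0, 0]     # current frame observation
print(is_background(frame, model))        # True -> background pixel
```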
The ability of the human brain to communicate with its environment has become a reality through the use of Brain-Computer Interface (BCI)-based mechanisms. Electroencephalography (EEG) has gained popularity as a non-invasive means of brain connection. Traditionally, such devices were used in clinical settings to detect various brain diseases. However, as technology advances, companies such as Emotiv and NeuroSky are developing low-cost, easily portable EEG-based consumer-grade devices that can be used in various application domains such as gaming and education. This article discusses the areas in which EEG has been applied and how it has proven beneficial for those with severe motor disorders, in rehabilitation, and as a form of communication …
Using watermarking techniques and digital signatures can better solve the problems of digital images transmitted over the Internet, such as forgery, tampering, and alteration. In this paper, we propose an invisible fragile watermark and an MD5-based algorithm for authenticating digital images and detecting tampering in the Discrete Wavelet Transform (DWT) domain. The digital image is decomposed using a 2-level DWT, and the middle- and high-frequency sub-bands are used for watermark and digital signature embedding. The authentication data are embedded in a number of the coefficients of these sub-bands according to an adaptive threshold based on the watermark length and the coefficients of each DWT level. These sub-bands are used because they a…
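The decomposition-and-embedding stage might look like the following PyWavelets sketch; it nudges level-1 detail coefficients by a fixed strength rather than applying the paper's adaptive threshold (which depends on the watermark length and per-level coefficients), so treat it as a simplified illustration rather than the authors' method.

```python
import numpy as np
import pywt  # PyWavelets

def embed_bits(image, bits, wavelet="haar", strength=8.0):
    """Embed watermark bits in high-frequency sub-bands of a 2-level DWT.

    Each bit nudges one level-1 horizontal-detail coefficient up or down
    by `strength`; the real scheme selects coefficients via an adaptive
    threshold, which is simplified away here.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=2)
    cH1, cV1, cD1 = [c.copy() for c in coeffs[2]]   # finest-level details
    flat = cH1.ravel()
    for i, bit in enumerate(bits[: flat.size]):
        flat[i] += strength if bit else -strength
    coeffs[2] = (cH1, cV1, cD1)
    return pywt.waverec2(coeffs, wavelet)           # watermarked image

# Toy example: a random 64x64 "image" and a 16-bit signature fragment.
img = np.random.randint(0, 256, (64, 64))
marked = embed_bits(img, [1, 0, 1, 1, 0, 0, 1, 0] * 2)
print(marked.shape)
```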
In this paper, the goal of the proposed method is to protect data against different types of attacks by unauthorized parties. The basic idea is to generate a private key from specific features of a digital color image, namely its colors (red, green, and blue). The private key is generated from the colors of the digital color image by computing the color frequencies of the blue color of the image, finding the maximum frequency of the blue color, multiplying it by its value, and performing an addition to produce the generated key. Once the private key is generated, it must be converted into binary representation. The generated key is extracted from the blue color of the keyed image, then we select a c…
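Read literally, the key-generation steps can be sketched as below. The abstract's final "adding process" is truncated, so adding the most frequent blue value back onto the product is an assumption made purely for illustration.

```python
import numpy as np

def generate_key(blue_channel):
    """Derive a key from the blue channel as the abstract outlines:
    histogram the blue values, take the most frequent value, multiply
    its count by the value itself, and express the key in binary.

    The abstract's final "adding process" is truncated; adding the
    value back onto the product is an assumption for this sketch.
    """
    counts = np.bincount(blue_channel.ravel(), minlength=256)
    value = int(counts.argmax())          # most frequent blue level
    freq = int(counts[value])             # its frequency
    key = freq * value + value            # assumed combination step
    return format(key, "b")               # binary representation

# Toy 4x4 blue channel; value 10 appears 8 times, so key = 8*10 + 10.
blue = np.array([[10, 10, 10, 200],
                 [10, 55, 55, 200],
                 [10, 10, 55, 200],
                 [90, 90, 10, 10]], dtype=np.uint8)
print(generate_key(blue))
```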
The main challenge is to protect the environment from future deterioration due to pollution and the depletion of natural resources. Therefore, one of the most important things to pay attention to, and whose negative impact must be eliminated, is solid waste. Solid waste is a double-edged sword depending on how it is handled: neglecting it creates a serious environmental risk through water, air, and soil pollution, while handling it in the right way makes it an important resource for preserving the environment. Accordingly, the proper management of solid waste and its reuse or recycling is the most important factor. Attention has therefore turned to using solid waste in different ways, the most common being its use as an alternative …
The choice of binary pseudonoise (PN) sequences with specific properties, such as a long period, high complexity, randomness, and minimal cross- and auto-correlation, is essential for some communication systems. In this research, a nonlinear PN generator is introduced. It consists of a combination of basic components such as the Linear Feedback Shift Register (LFSR) and the ?-element, which is a type of R×R crossbar switch. The period and complexity of the sequences generated by the proposed generator are computed, and the randomness properties of these sequences are measured by well-known randomness tests.
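The LFSR building block can be sketched as follows; the ?-element crossbar component is garbled in the source and so is omitted, and the 4-bit taps shown (for the primitive polynomial x^4 + x^3 + 1) are an illustrative choice that yields the maximal period.

```python
def lfsr(seed, taps, length):
    """Fibonacci-style linear feedback shift register.

    seed  : initial register contents as a list of bits (nonzero)
    taps  : 0-indexed positions XORed to form the feedback bit;
            primitive-polynomial taps give the maximal 2**n - 1 period
    length: number of output bits to generate
    """
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])               # emit the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t]                  # feedback XOR of tap bits
        state = [fb] + state[:-1]           # shift, insert feedback
    return out

# 4-bit LFSR with taps for x^4 + x^3 + 1: period 2**4 - 1 = 15.
seq = lfsr([1, 0, 0, 1], taps=[3, 2], length=15)
print(seq)
```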
Face recognition is a crucial biometric technology used in various security and identification applications. Ensuring accuracy and reliability in facial recognition systems requires robust feature extraction and secure processing methods. This study presents an accurate facial recognition model using a feature extraction approach within a cloud environment. First, the facial images undergo preprocessing, including grayscale conversion, histogram equalization, Viola-Jones face detection, and resizing. Then, features are extracted using a hybrid approach that combines Linear Discriminant Analysis (LDA) and Gray-Level Co-occurrence Matrix (GLCM). The extracted features are encrypted using the Data Encryption Standard (DES) for security
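The GLCM half of the hybrid feature extractor might be sketched as below using scikit-image; the distances, angles, and the four texture properties are common illustrative choices rather than values fixed by the abstract, and the LDA stage and DES encryption are omitted from this sketch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_face):
    """Texture descriptors from a gray-level co-occurrence matrix.

    Distance 1, two angles, and four Haralick-style properties are
    illustrative assumptions; the abstract does not fix them.
    """
    glcm = graycomatrix(gray_face, distances=[1],
                        angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy 8-bit "face" patch; a real pipeline would use the preprocessed,
# histogram-equalised face region from Viola-Jones detection.
patch = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(glcm_features(patch).shape)   # (8,) -> 4 properties x 2 angles
```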