String matching is regarded as one of the essential problems in computer science, and a variety of computer applications provide string matching services to their end users. The remarkable growth in the amount of data created and stored by modern computational devices drives researchers to devise ever more powerful methods for coping with this problem. In this research, the Quick Search string matching algorithm is implemented in a multi-core environment using OpenMP directives, which can be employed to reduce the overall execution time of the program. English text, protein, and DNA data types are used to examine the effect of parallelizing the Quick Search string matching algorithm on a multi-core platform. Experimental outcomes reveal that the overall performance of the algorithm has improved, and the reduction in execution time is considerable enough to recommend the multi-core environment as a suitable platform for parallelizing the Quick Search string matching algorithm.
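As a rough illustration only (not the paper's implementation), the following C sketch shows Quick Search with its bad-character shift table, where the text is split into overlapping chunks and the chunk loop is parallelized by an OpenMP directive; the function names, chunking scheme, and overlap handling are all assumptions:

```c
/* Sketch: Quick Search (Sunday) matching over a text split into chunks,
 * with the chunk loop parallelized by an OpenMP directive.
 * Compile with -fopenmp (GCC/Clang). */
#include <string.h>

#define ALPHABET 256

/* Count occurrences of pat (length m) whose start lies in [lo, hi - m]. */
static long qs_count(const char *txt, long lo, long hi,
                     const char *pat, long m, const long *shift)
{
    long count = 0, i = lo;
    while (i + m <= hi) {
        if (memcmp(txt + i, pat, (size_t)m) == 0)
            count++;
        /* Quick Search shift: inspect the character just past the window. */
        i += (i + m < hi) ? shift[(unsigned char)txt[i + m]] : m;
    }
    return count;
}

long qs_parallel(const char *txt, long n, const char *pat, long m, int chunks)
{
    long shift[ALPHABET];
    for (int c = 0; c < ALPHABET; c++) shift[c] = m + 1;
    for (long j = 0; j < m; j++) shift[(unsigned char)pat[j]] = m - j;

    long total = 0;
    long step = (n + chunks - 1) / chunks;

    /* Each thread scans one chunk; chunks overlap by m-1 characters so
     * matches straddling a boundary are neither missed nor double-counted. */
    #pragma omp parallel for reduction(+:total)
    for (int k = 0; k < chunks; k++) {
        long lo = (long)k * step;
        long hi = lo + step + m - 1;
        if (hi > n) hi = n;
        if (lo < n)
            total += qs_count(txt, lo, hi, pat, m, shift);
    }
    return total;
}
```

Because each chunk only owns the match start positions inside its own slice, the per-chunk counts can simply be summed with the OpenMP reduction clause.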
In information security, fingerprint verification is one of the most common recent approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and measuring the similarity between them. Several studies have applied different techniques to the matching process, such as fuzzy vault and image filtering approaches, yet these approaches still suffer from imprecise articulation of the interesting patterns in the biometrics. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance compared with these approaches.
This paper introduces a relation between the resultant and the Jacobian determinant by generalizing Sakkalis' theorem from two polynomials in two variables to the case of n polynomials in n variables. This leads us to study results of this type and to use the relation to attack the Jacobian problem. The last section presents our contribution toward proving the conjecture.
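For reference (standard background, not the paper's specific relation), the Jacobian determinant of n polynomials f_1, ..., f_n in the variables x_1, ..., x_n is

```latex
J(f_1,\dots,f_n)=\det\!\left(\frac{\partial f_i}{\partial x_j}\right)_{1\le i,j\le n},
```

and the Jacobian conjecture asserts that, over a field of characteristic zero, a polynomial map whose Jacobian determinant is a nonzero constant admits a polynomial inverse.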
Data-driven models perform poorly on part-of-speech tagging for the Square Hmong language, a low-resource corpus. This paper designs a weight evaluation function to reduce the influence of unknown words and proposes an improved harmony search algorithm that uses roulette and local evaluation strategies to handle the Square Hmong part-of-speech tagging problem. The experiments show that the average accuracy of the proposed model is 6% and 8% higher than that of the HMM and BiLSTM-CRF models, respectively, while its average F1 is 6% and 3% higher than that of the HMM and BiLSTM-CRF models, respectively.
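The paper's weight evaluation function and local evaluation strategy are not reproduced here; as a rough illustration of the roulette strategy mentioned above, a generic roulette-wheel selection routine over harmony-memory fitness scores might look like the following C sketch (function and parameter names are assumptions, not the authors' code):

```c
/* Roulette-wheel selection over harmony-memory fitness scores.
 * Returns the index of the chosen harmony; a higher fitness value
 * gives a proportionally higher chance of being picked. */
#include <stdlib.h>

int roulette_select(const double *fitness, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += fitness[i];

    /* Spin the wheel: a uniform draw in [0, total). */
    double spin = ((double)rand() / RAND_MAX) * total;

    double acc = 0.0;
    for (int i = 0; i < n; i++) {
        acc += fitness[i];
        if (spin < acc)
            return i;
    }
    return n - 1;   /* fallback for floating-point round-off */
}
```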
To expedite the learning process, a group of algorithms known as parallel machine learning algorithms can be executed simultaneously on several computers or processors. As data grows in both size and complexity, and as businesses seek efficient ways to mine that data for insights, such algorithms will become increasingly crucial. Data parallelism, model parallelism, and hybrid techniques are just some of the methods described in this article for speeding up machine learning algorithms. We also cover the benefits and challenges associated with parallel machine learning, such as data splitting, communication, and scalability, and we compare how well various methods perform on a variety of machine learning tasks and datasets.
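As a minimal sketch of the data-parallelism idea (splitting the training samples across threads), the following C/OpenMP fragment computes one gradient step of simple linear regression; it is an illustrative assumption, not code from the article:

```c
/* Sketch: data parallelism for one gradient step of least-squares linear
 * regression. Each thread processes a slice of the samples and the partial
 * gradients are combined with an OpenMP reduction.
 * Compile with -fopenmp (GCC/Clang). */
void gradient_step(const double *x, const double *y, long n,
                   double *w, double *b, double lr)
{
    double gw = 0.0, gb = 0.0;

    /* Data parallelism: the n samples are split across threads. */
    #pragma omp parallel for reduction(+:gw, gb)
    for (long i = 0; i < n; i++) {
        double err = (*w * x[i] + *b) - y[i];   /* prediction error */
        gw += err * x[i];
        gb += err;
    }

    *w -= lr * gw / n;   /* average gradient, scaled by learning rate */
    *b -= lr * gb / n;
}
```

Model parallelism, by contrast, would split the parameters rather than the samples across workers; hybrid schemes combine both.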
The Light and the Dark is the fourth novel in a series written by Charles Percy Snow. It tackles a phase in the life of the gifted scholar and remarkable individual Roy Calvert as he searches for a source of power and meaning in life to relieve his inner turmoil. The character of Roy Calvert is based on Snow's friend Charles Allberry, through whom the novel exposes the message Roy intends to convey at a certain phase of his life, as well as the prophecy the novel carries amid the catastrophe so widespread in the 1930s.
The assignment model is a mathematical model that expresses an important problem facing enterprises and companies in the public and private sectors as they manage their activities: making the appropriate decision to obtain the best allocation of tasks to machines, jobs, or workers in order to increase profits or reduce costs and time. This model is called a multi-objective assignment model because it takes the factors of time and cost into account together; the assignment problem therefore has two objectives and cannot be solved by the usual methods, so multi-objective programming was used to solve the problem.
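For reference, a standard two-objective (cost and time) assignment formulation, given here in common textbook form rather than the paper's own notation, can be written as:

```latex
\min\; Z_1=\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\,x_{ij},\qquad
\min\; Z_2=\sum_{i=1}^{n}\sum_{j=1}^{n} t_{ij}\,x_{ij},
```
```latex
\text{subject to}\quad
\sum_{j=1}^{n} x_{ij}=1\ \ (i=1,\dots,n),\qquad
\sum_{i=1}^{n} x_{ij}=1\ \ (j=1,\dots,n),\qquad
x_{ij}\in\{0,1\},
```

where c_ij and t_ij denote the cost and time of assigning task i to machine (or worker) j, and x_ij = 1 when that assignment is made.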
The logistic regression model is an important statistical model that describes the relationship between a binary response variable and explanatory variables. The large number of explanatory variables usually used to explain the response leads to the problem of multicollinearity among the explanatory variables, which makes the estimation of the model parameters inaccurate.
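For context (the standard form, not specific to this paper), the binary logistic model expresses the success probability through the logit link:

```latex
\pi_i=P(y_i=1\mid x_i)
     =\frac{1}{1+e^{-(\beta_0+\beta_1 x_{i1}+\dots+\beta_k x_{ik})}},\qquad
\operatorname{logit}(\pi_i)=\ln\frac{\pi_i}{1-\pi_i}
     =\beta_0+\sum_{j=1}^{k}\beta_j x_{ij}.
```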
The research identifies factors that may contribute to improving the productivity of the General Company for Vegetable Oil Products / Al-Farab factory and aims to reveal the relationship between the Quick Response Manufacturing (QRM) system and the scheduling of operations.
The implementation was carried out at the General Company for Vegetable Oil Products (Al-Farab factory). The factory suffers from a failure to follow scheduling in its operations, does not take lead times into account, and experiences delays in product delivery dates; this draws the attention of the factory's administration to the use of Quick Response Manufacturing (QRM) to control capacity, inventory, and machines.
Excessive skewness, which sometimes occurs in data, is an obstacle to assuming a normal distribution. Therefore, recent studies have been active in studying the skew-normal distribution (SND), which fits skewed data and extends the normal distribution with an additional skewness parameter (α) that gives it more flexibility. When estimating the parameters of the SND we face non-linear equations, and solving them by the maximum likelihood (ML) method yields inaccurate and unreliable solutions. To solve this problem, two methods can be used: the genetic algorithm (GA) and the iterative reweighting (IR) algorithm based on the maximum likelihood method.
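For reference (the standard Azzalini form, not taken from the paper), the density of the skew-normal distribution with skewness parameter α is

```latex
f(x;\alpha)=2\,\phi(x)\,\Phi(\alpha x),\qquad x\in\mathbb{R},
```

where φ and Φ are the standard normal density and distribution function; setting α = 0 recovers the normal distribution.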