Face detection systems rest on the assumption that each individual has a unique facial structure and that computerized face matching is possible using facial symmetry. Face recognition technology has been employed for security purposes in many organizations and businesses worldwide. This research examines machine learning classification approaches that use feature extraction for a facial image detection system. Owing to its high accuracy and speed, the Viola-Jones method is utilized for face detection on the MUCT database. The LDA feature extraction method is applied, and its output serves as input to three machine learning classifiers: J48, OneR, and JRip. The experimental results indicate that the J48 classifier with LDA achieves the highest performance, with 96.0001% accuracy.
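A minimal sketch of the pipeline this abstract describes, with two stated assumptions: scikit-learn's DecisionTreeClassifier stands in for Weka's J48 (both are decision-tree learners), and synthetic arrays stand in for the MUCT face crops, which are not bundled here.

```python
# Hedged sketch: LDA feature extraction feeding a decision-tree classifier.
# DecisionTreeClassifier is an assumed stand-in for Weka's J48; the random
# arrays below are stand-ins for flattened MUCT face images and identities.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # stand-in for flattened face crops
y = rng.integers(0, 4, size=200)    # stand-in identity labels (4 classes)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LDA can keep at most n_classes - 1 components (here 3).
model = make_pipeline(LinearDiscriminantAnalysis(n_components=3),
                      DecisionTreeClassifier(random_state=0))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

On real labeled face data the same two-stage pipeline applies unchanged; only the loading step differs.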
In this paper, a simple medical image compression technique is proposed, based on utilizing the residual of an autoregressive (AR) model along with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique is improved by roughly a factor of two on average compared to the traditional autoregressive approach, while preserving image quality by retaining only the significant bit-plane layers that contribute most to the image.
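A brief illustration of the bit-plane slicing step named in the abstract: an 8-bit image decomposes into eight binary layers, and reconstructing from only the most significant planes keeps most of the visual content. The 2x2 array is an illustrative stand-in for a medical image.

```python
# Hedged illustration of bit-plane slicing (BPS): split an 8-bit image
# into 8 binary planes, then reconstruct from the top 4 planes only.
import numpy as np

img = np.array([[200, 13], [7, 255]], dtype=np.uint8)
planes = [(img >> b) & 1 for b in range(8)]   # plane 0 = LSB, plane 7 = MSB

# Keep only the 4 most significant planes (bits 4..7).
approx = sum(planes[b] << b for b in range(4, 8)).astype(np.uint8)
# 200 -> 192, 13 -> 0, 7 -> 0, 255 -> 240: coarse values survive,
# low-order detail is discarded.
```

The paper's method then codes the AR-model residual per retained plane rather than the raw pixels; this snippet shows only the slicing itself.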
The research aimed at identifying the effect of using a constructive learning model on academic achievement and on learning the soccer dribbling skill in 2nd-grade secondary school students. The researcher used the experimental method on 30 secondary school students; 10 were selected for the pilot study, and 20 were divided into two groups. The experimental group followed the constructive learning model, while the control group followed the traditional method. The experimental program lasted eight weeks, with two teaching sessions per week for each group. The data were collected and treated using SPSS, leading to the conclusion that the constructive learning model has a positive effect on developing academic achievement and learning the soccer dribbling skill in 2nd-grade secondary school students.
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate how cancer is dealt with, especially in the field of cancer prevention and detection. Traditional ways of analyzing cancer data have their limits, and cancer data is growing quickly; this creates an opening for deep learning, with its powerful ability to analyze and process such data. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three different deep learning models (EfficientNetB3, ResNet50, and ResNet101) with the transfer learning concept. The three models are trained using a
In the past decade, injection of nanoparticles (NPs) into underground formations as liquid nanodispersions has been suggested as a smart alternative to conventional methods in tertiary oil recovery projects in mature oil reservoirs. Such reservoirs, however, are strong candidates for carbon geo-sequestration (CGS) projects, and the presence of nanoparticles after nanofluid-flooding can add complexity to carbon geo-storage projects. Despite studies investigating CO2 injection and nanofluid-flooding for EOR projects, no information has been reported about the potential synergistic effects of CO2 and NPs on enhanced oil recovery (EOR) and CGS with respect to the interfacial tension (γ) of the CO2-oil system. This study thus extensively inves
Plagiarism is described as using someone else's ideas or work without their permission. Using lexical and semantic text-similarity notions, this paper presents a plagiarism detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and identifies the top five results from each engine on the first retrieved page. The corpus is made up of the downloaded files and the scraped web-page text of the search engines' results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will
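The lexical-similarity step can be sketched as follows. The abstract says texts are "encoded as vectors" but does not name the encoding, so TF-IDF with cosine similarity is an assumed, common choice for this comparison.

```python
# Hedged sketch of lexical similarity between a suspicious text and
# candidate sources: TF-IDF vectors compared by cosine similarity
# (TF-IDF is an assumed encoding; the paper's exact scheme is not stated).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

suspicious = "the quick brown fox jumps over the lazy dog"
source     = "a quick brown fox jumped over a lazy dog"
unrelated  = "stock prices fell sharply in early trading"

vectors = TfidfVectorizer().fit_transform([suspicious, source, unrelated])
sims = cosine_similarity(vectors[0], vectors[1:])
# The near-duplicate source scores far higher than the unrelated text,
# so a threshold on this score flags likely lexical plagiarism.
```

Semantic detection, as the abstract distinguishes, would replace the TF-IDF vectors with dense sentence embeddings while keeping the same cosine comparison.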
Background: Surveys of occlusion in population groups usually include among their objectives the academic assessment of occlusal features, the planning of resources for public health treatment programmes, the comparison of different populations, and the screening of groups for orthodontic treatment. Likewise, a thorough investigation of the occurrence of malocclusion among school students would be of major importance in planning orthodontic treatment in the public dental health services. For this purpose it is necessary to have detailed information on the prevalence of individual malocclusions among boys and girls at different ages, distributed regionally, and moreover an analysis of the need for orthodontic treatment in the different sc
Natural Language Processing (NLP) deals with analysing, understanding, and generating language the way humans do. One of the challenges of NLP is training computers to learn and use a language as humans do. Every training session consists of several types of sentences with different contexts and linguistic structures. The meaning of a sentence depends on the actual meanings of its main words together with their correct positions: the same word can serve as a noun, an adjective, or another part of speech depending on where it appears. In NLP, word embedding is a powerful method that is trained on a large collection of texts and encodes general semantic and syntactic information about words. Choosing the right word embedding generates more efficient results than others
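The core idea can be shown in miniature: each word maps to a dense vector, and semantically related words lie closer together under cosine similarity. The three-dimensional vectors below are hand-picked for illustration only, not trained embeddings.

```python
# Hedged toy illustration of word embeddings: related words ("king",
# "queen") get nearby vectors; an unrelated word ("apple") does not.
# These vectors are invented for the example, not learned from a corpus.
import numpy as np

embedding = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

king_queen = cosine(embedding["king"], embedding["queen"])
king_apple = cosine(embedding["king"], embedding["apple"])
```

Trained embeddings (e.g. word2vec or GloVe) produce the same geometry from co-occurrence statistics in large corpora, which is what makes the choice of embedding consequential downstream.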