With the growth of industry and industrial products, quantities of waste have increased worldwide, especially plastic waste; plastic pollution is considered one of the waste problems of the modern era that threatens the environment and living organisms. On this basis, a solution must be found to reuse this waste and recycle it safely so that it does not threaten the environment. Therefore, this research used plastic waste as an improvement material for clay soil. Two types of tests were conducted. The first was a laboratory test, in which the undrained shear strength (cohesion), compression index (Cc), and swelling index (Cr) of the improved and unimproved soils were measured; plastic was added in proportions of (0.5, 1, 1.5, 2)%. The second part was carried out through physical modeling, in which 2% plastic, considered the optimal percentage in this research, was used, and the bearing capacity-settlement relationship was determined for both the improved and unimproved soils. Using this percentage of plastic showed an improvement in the bearing capacity-settlement behavior of the soil, as the stress at 10% settlement increased from 405 kPa to 459 kPa.
The automatic estimation of speaker characteristics such as height, age, and gender has various applications in forensics, surveillance, customer service, and many human-robot interaction settings. These applications are often required to produce a response promptly. This work proposes a novel approach to speaker profiling by combining filter bank initializations, such as continuous wavelet and gammatone filter banks, with one-dimensional (1D) convolutional neural networks (CNNs) and residual blocks. The proposed end-to-end model goes from the raw waveform to an estimated height, age, and gender of the speaker by learning the speaker representation directly from the audio signal, without relying on handcrafted and pre-computed acoustic features.
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data to a concept-frame graph (CFG). As is well known, DBSCAN groups data points of the same kind into clusters, while points falling outside any cluster's behavior are treated as noise or anomalies. DBSCAN can thus detect abnormal points that lie farther than a set threshold from any cluster (extreme values). However, not all anomalies are of this kind, abnormal, unusual, or far from a specific group; there is a type of data point that does not occur repeatedly but is considered abnormal with respect to the known group. The analysis showed that DBSCAN using the
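The paper's CFG-based extension is not given in this excerpt, but the core DBSCAN behavior the abstract describes, dense regions becoming clusters and sparse points becoming noise, can be sketched in plain Python (the point set and parameters below are illustrative, not from the paper):

```python
from math import dist

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: core points grow clusters; sparse points get label -1 (noise)."""
    labels = {i: None for i in range(len(points))}
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1          # too sparse: noise / anomaly candidate
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neighbors if j != i]
        while seeds:                # expand the cluster from each reachable core point
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neigh = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(j_neigh) >= min_pts:
                seeds.extend(j_neigh)
    return labels

# Two dense groups plus one distant outlier that ends up flagged as noise.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=3)
```

This is exactly the limitation the abstract points out: only points that are *spatially* far from every dense region are flagged, which is why the authors turn to a graph representation for the remaining anomaly types.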
This paper presents a fast and robust approach to English text encryption and decryption based on the Pascal matrix. The technique encrypts Arabic or English text, or both, and shows the result of applying this method to plain text (the original message): how the intelligible plain text is transformed into unintelligible ciphertext in order to secure information from unauthorized access and theft. The encryption scheme uses a pseudo-random encryption key generated by an algorithm, all based on the Pascal matrix. Encryption and decryption are implemented using MATLAB as the programming language and Notepad++ to write the input text.
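The paper's exact MATLAB scheme is not reproduced in this excerpt. As an illustration of why a Pascal matrix is attractive for this purpose, the sketch below (in Python, not the paper's MATLAB) uses the lower-triangular Pascal matrix L[i][j] = C(i, j), whose exact integer inverse has entries (-1)^(i-j) C(i, j), a standard identity, so decryption is a lossless integer matrix multiply; the round-trip example is hypothetical:

```python
from math import comb

def pascal_lower(n):
    """Lower-triangular Pascal matrix: L[i][j] = C(i, j) for j <= i, else 0."""
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_lower_inv(n):
    """Its exact integer inverse: (-1)^(i-j) * C(i, j)."""
    return [[(-1) ** (i - j) * comb(i, j) if j <= i else 0 for j in range(n)]
            for i in range(n)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def encrypt(text):
    # Character codes mixed by the Pascal matrix become unintelligible numbers.
    return matvec(pascal_lower(len(text)), [ord(c) for c in text])

def decrypt(cipher):
    codes = matvec(pascal_lower_inv(len(cipher)), cipher)
    return "".join(chr(c) for c in codes)

cipher = encrypt("HELLO")       # e.g. a list of mixed integer codes
plain = decrypt(cipher)         # recovers "HELLO" exactly
```

Because the matrix has determinant 1 and an integer inverse, no rounding error can corrupt the recovered text, which is one plausible reason for the "fast and robust" claim.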
Many of the proposed methods introduce perforated fins in the straight direction to improve the thermal performance of a heat sink. Here, an innovative form of perforated fin (with inclination angles) was considered. The present rectangular pin fins contain elliptical perforations, with two models and two cases. The signum function is used for modeling the opposite and variable behavior of the heat transfer area. To find the general solution, the degenerate hypergeometric equation was used as a new derivation method and then solved by Kummer's series. Two validation methods (previous work and Ansys 16.0 Steady-State Thermal) are considered. The validation results show strong agreement (0.3
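For reference, the degenerate (confluent) hypergeometric equation and the Kummer's series solution the abstract refers to are, in their standard form:

```latex
x y'' + (b - x) y' - a y = 0,
\qquad
M(a, b, x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k} \frac{x^k}{k!},
```

where $(a)_k = a(a+1)\cdots(a+k-1)$ is the Pochhammer symbol; $M(a,b,x)$ is Kummer's function, the regular solution at $x = 0$. How the fin's energy balance is reduced to this form is specific to the paper and not reproduced here.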
Canonical correlation analysis is one of the common methods for analyzing data and determining the relationship between two sets of variables under study, as it depends on analyzing the variance-covariance matrix or the correlation matrix. Researchers resort to many methods to estimate the canonical correlation (CC); some are biased by outliers, while others are resistant to such values; in addition, there are criteria that assess the efficiency of the estimation methods.
In our research, we dealt with robust estimation methods that depend on the correlation matrix in the analysis process to obtain a robust canonical correlation coefficient, namely the method of Biwe
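The robust variants the abstract refers to replace the classical correlation matrix with a robust one; the classical computation they build on can be sketched with NumPy (the synthetic data and function name are illustrative, not from the paper):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Classical (non-robust) canonical correlations from the joint correlation matrix.

    A robust estimator would substitute a robust correlation matrix for
    Rxx, Ryy, Rxy below and keep the rest of the computation unchanged.
    """
    n = X.shape[0]
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Y - Y.mean(0)) / Y.std(0)
    Rxx, Ryy, Rxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    # Whiten each block; the singular values of the whitened cross-block
    # are the canonical correlations.
    Kx = np.linalg.inv(np.linalg.cholesky(Rxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Ryy))
    s = np.linalg.svd(Kx @ Rxy @ Ky.T, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Two variable sets sharing one latent factor z: one large CC, one near zero.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
cc = canonical_correlations(X, Y)
```

Because the whole analysis flows through the correlation matrix, a single outlier-distorted entry propagates into every canonical coefficient, which motivates the robust estimators studied in the paper.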
Community detection is an important and interesting topic for better understanding and analyzing complex network structures. Detecting hidden partitions in complex networks is proven to be an NP-hard problem that may not be accurately resolved using traditional methods, so it is modeled in the literature as an optimization problem and solved using evolutionary computation methods. In recent years, many researchers have directed their efforts toward community structure detection by developing different algorithms and making use of single-objective optimization methods. In this study, we have continued that line of research by improving the Particle Swarm Optimization (PSO) algorithm using a
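The particular PSO improvement developed in the study is not detailed in this excerpt; a minimal single-objective PSO, which community-detection variants extend with a network-specific encoding and a fitness such as modularity, can be sketched as follows (the sphere test function and all parameters are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal single-objective PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

For community detection, the continuous positions above are typically replaced by a discrete node-to-community encoding, and `f` by the negative modularity of the induced partition.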
A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the parameters of the model, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike information criterion (AIC) value. The case study involves four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, located in northern Iraq. The model performance was
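Assuming the "Akaike test value" refers to the Akaike information criterion, the least-squares form of the criterion can be computed as below; it penalizes extra parameters, so a more complex model must reduce the residual sum of squares enough to justify its parameter count (the RSS values and parameter counts here are hypothetical):

```python
from math import log

def aic(rss, n, k):
    """AIC for a least-squares model: n * ln(RSS / n) + 2k. Lower is better."""
    return n * log(rss / n) + 2 * k

# Two hypothetical fits of the same 120 monthly records: the second model
# adds 8 parameters but barely improves the fit, so its AIC is worse.
simple = aic(rss=40.0, n=120, k=4)
complex_ = aic(rss=38.5, n=120, k=12)
```

Minimizing this quantity during the genetic-algorithm mutation step trades goodness of fit against model complexity rather than chasing RSS alone.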
This work implements a face recognition system based on two stages: the first is the feature extraction stage and the second is the classification stage. The feature extraction stage consists of Self-Organizing Maps (SOMs) in a hierarchical format, in conjunction with Gabor filters and local image sampling. Different types of SOMs were used, and a comparison between their results is given.
The next stage is the classification stage, which consists of a self-organizing map neural network; the goal of this stage is to find the image most similar to the input image. The proposed algorithm was implemented using C++ packages; this work is a successful classifier for a face database consisting of 20
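The paper's hierarchical SOM pipeline is not reproduced in this excerpt; the basic SOM update it relies on, moving the best-matching unit and its map neighbors toward each input under a shrinking neighborhood, can be sketched in plain Python rather than the paper's C++ (map size, data, and learning schedule are illustrative):

```python
import random
from math import exp

def train_som(data, n_units=4, iters=300, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D self-organizing map trained on a list of equal-length tuples."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        frac = t / iters
        lr = lr0 * (1 - frac)                 # decaying learning rate
        sigma = max(sigma0 * (1 - frac), 0.5)  # shrinking neighborhood width
        b = bmu(w, x)
        for i in range(n_units):
            # Gaussian neighborhood on the 1-D map index, centered on the BMU.
            h = exp(-((i - b) ** 2) / (2 * sigma ** 2))
            for d in range(dim):
                w[i][d] += lr * h * (x[d] - w[i][d])
    return w

def bmu(w, x):
    """Best-matching unit: node with the smallest squared distance to x."""
    return min(range(len(w)),
               key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(len(x))))

# Two tight groups of 2-D samples; after training, each group claims its own unit,
# so classification reduces to the nearest-prototype lookup the abstract describes.
data = [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05), (1.0, 1.0), (0.95, 1.0), (1.0, 0.95)]
w = train_som(data)
```

In a recognition setting, each stored face's feature vector is assigned to its BMU, and an input image is classified by the label of the unit it maps to.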