This paper investigates the capacitated vehicle routing problem (CVRP), one of the many combinatorial problems for which no exact, efficient solution method is known. Over the past few decades, researchers have published numerous studies and applied many strategies and techniques to it, yet finding the least-cost routing remains extremely complicated; in practice, approximate solutions have been obtained whose quality varies with the search space. In this work, tabu search (TS) is used to tackle the problem, since it is capable of solving many complicated combinatorial problems. The algorithm has been adapted to the problem at hand, and its methodology differs from the standard algorithm: its structure is designed so that the program does not require a large database to store intermediate data, which speeds up execution and shortens the time needed to obtain a solution. The adapted algorithm succeeds in solving the problem and finds a shortest route.
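As a rough illustration of the general technique (not the paper's adapted variant), the sketch below runs a basic tabu search on a hypothetical toy CVRP instance; the coordinates, demands, pairwise-swap neighborhood, and tabu tenure are all illustrative assumptions.

```python
# Minimal tabu search sketch for a toy CVRP instance (illustrative only;
# the paper's adapted variant and data structures are not specified here).
import math

# Hypothetical toy instance: depot at index 0, customers 1..5.
coords = {0: (0, 0), 1: (2, 3), 2: (5, 1), 3: (6, 4), 4: (1, 6), 5: (4, 7)}
demand = {1: 3, 2: 4, 3: 2, 4: 5, 5: 3}
capacity = 8

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def split_routes(perm):
    """Greedily cut a customer permutation into capacity-feasible routes."""
    routes, route, load = [], [], 0
    for c in perm:
        if load + demand[c] > capacity:
            routes.append(route); route, load = [], 0
        route.append(c); load += demand[c]
    if route:
        routes.append(route)
    return routes

def cost(perm):
    total = 0.0
    for route in split_routes(perm):
        tour = [0] + route + [0]
        total += sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    return total

def tabu_search(iterations=200, tenure=7):
    current = list(demand)            # initial customer order
    best, best_cost = current[:], cost(current)
    tabu = {}                         # move -> iteration until which it is tabu
    for it in range(iterations):
        candidates = []
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                move = (current[i], current[j])
                neigh = current[:]
                neigh[i], neigh[j] = neigh[j], neigh[i]
                c = cost(neigh)
                # aspiration: a tabu move is allowed only if it beats the best
                if tabu.get(move, -1) < it or c < best_cost:
                    candidates.append((c, move, neigh))
        c, move, neigh = min(candidates)
        current = neigh
        tabu[move] = it + tenure
        if c < best_cost:
            best, best_cost = neigh[:], c
    return split_routes(best), best_cost

print(tabu_search())
```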
Stream ciphers are an important class of encryption algorithms. There is a vast body of theoretical knowledge on stream ciphers, and various design principles for them have been proposed and extensively analyzed. This paper presents a new stream cipher method that segments the plaintext into a number of registers, combines each register with another through combinational logic (AND, OR, JK, NOT, XOR), and uses a variable-length key register, which enhances security against attacks. The strength of the method is then compared with RSA by calculating the time needed to recover the original text using a genetic algorithm.
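For illustration only, the following sketch shows a generic stream-cipher building block of the kind the abstract describes: a key register drives a keystream that is XORed with the plaintext bits. It does not reproduce the paper's register-segmentation and gate-combination scheme; the LFSR taps and key length are arbitrary choices for the example.

```python
# Generic stream-cipher sketch: an LFSR keystream XORed with the plaintext.
def lfsr_keystream(key_bits, taps, n):
    """Yield n pseudo-random bits from a linear feedback shift register."""
    state = list(key_bits)
    for _ in range(n):
        yield state[-1]
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]

def xor_cipher(data: bytes, key_bits, taps):
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    stream = lfsr_keystream(key_bits, taps, len(bits))
    out_bits = [b ^ k for b, k in zip(bits, stream)]
    out = bytearray()
    for i in range(0, len(out_bits), 8):
        out.append(sum(bit << j for j, bit in enumerate(out_bits[i:i + 8])))
    return bytes(out)

key = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # variable-length key register
taps = [0, 2, 3, 5]
ct = xor_cipher(b"attack at dawn", key, taps)
pt = xor_cipher(ct, key, taps)                           # the same operation decrypts
print(ct.hex(), pt)
```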
Water/oil emulsion is considered one of the most refractory mixtures to separate because of the interference of the two immiscible liquids, water and oil. This research presents a study of the dewatering of a water/kerosene emulsion using a hydrocyclone. The effects of feed flow rate (3, 5, 7, 9, and 11 L/min), inlet water concentration of the emulsion (5%, 7.5%, 10%, 12.5%, and 15% by volume), and split ratio (0.1, 0.3, 0.5, 0.7, and 0.9) on the separation efficiency and pressure drop were studied. Dimensional analysis using the Pi theorem was applied for the first time to model the hydrocyclone based on the experimental data. The maximum separation efficiency, obtained at a split ratio of 0.1, was 94.3% at 10% inlet water concentration.
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power from sustainable solar energy, especially in Iraq, which has gone through repeated crises and suffers a severe shortage of electric power because of the wars and calamities it has endured. Their impact is still evident in all aspects of the daily life of Iraqis: the remnants of wars, siege, terrorism, the misguided policies of earlier and later governments, and regional interventions have led, among other consequences, to the destruction of electric power stations, while population growth demands a corresponding increase in generating capacity.
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are produced by a local, relative absence of light: first, a local decrease in the amount of light that reaches a surface, and second, a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis, but several factors can affect the detection result because of the complexity of real scenes. In this paper, a segmentation-based test is presented to detect shadows in an image, and a function-based concept is used to remove the detected shadow.
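As a very small baseline for the general idea only (not the paper's segmentation test or removal function), the sketch below flags pixels whose intensity falls well below the global mean as shadow and then rescales them toward the mean of the lit region; the threshold factor and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def detect_and_remove_shadow(gray, factor=0.6):
    lit_mean = gray.mean()
    shadow_mask = gray < factor * lit_mean            # local decrease in light
    lit_mean = gray[~shadow_mask].mean()
    shadow_mean = gray[shadow_mask].mean() if shadow_mask.any() else 1.0
    corrected = gray.astype(float)
    corrected[shadow_mask] *= lit_mean / shadow_mean  # compensate the drop
    return shadow_mask, np.clip(corrected, 0, 255).astype(np.uint8)

# Synthetic test image: a bright field with a darker rectangular "shadow".
img = np.full((100, 100), 200, dtype=np.uint8)
img[30:70, 30:70] = 90
mask, fixed = detect_and_remove_shadow(img)
print(mask.sum(), fixed[50, 50], fixed[10, 10])
```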
Neural cryptography deals with the problem of key exchange between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key shared by the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
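The mutual learning scheme usually studied in this setting is the tree parity machine (TPM). The sketch below synchronizes two TPMs with the Hebbian update rule; the parameters K, N, and L are illustrative choices, not necessarily those used in the paper.

```python
# Minimal tree parity machine (TPM) synchronization sketch.
import numpy as np

K, N, L = 3, 10, 3          # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(0)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # sigma: sign of each hidden unit's local field; tau: their product
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def update(self, x, tau_other):
        # Hebbian rule: update only when the outputs agree, and only the
        # hidden units whose sign matches the common output
        tau = int(np.prod(self.sigma))
        if tau != tau_other:
            return
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] += x[k] * tau
        np.clip(self.w, -L, L, out=self.w)

a, b = TPM(), TPM()
steps = 0
while not np.array_equal(a.w, b.w):
    x = rng.choice([-1, 1], size=(K, N))   # public random input
    ta, tb = a.output(x), b.output(x)
    a.update(x, tb)                         # each side learns from the other's bit
    b.update(x, ta)
    steps += 1
print("synchronized after", steps, "exchanged outputs")
```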
Compressing speech reduces data storage requirements and the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution, and the fast computation algorithms introduced here add desirable features to the transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression and compares their compression performance.
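A minimal sketch of the DWT side of such a comparison is shown below using PyWavelets; the GHM-derived MCT is not available in standard libraries, and the wavelet, decomposition level, 5% coefficient-retention rate, and synthetic test signal are all illustrative assumptions.

```python
# Threshold-based DWT compression on a synthetic signal using PyWavelets.
import numpy as np
import pywt

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)

coeffs = pywt.wavedec(speech, 'db4', level=5)
flat = np.concatenate(coeffs)
threshold = np.quantile(np.abs(flat), 0.95)      # keep the largest 5% of coefficients

compressed = [pywt.threshold(c, threshold, mode='hard') for c in coeffs]
kept = sum(int(np.count_nonzero(c)) for c in compressed)
reconstructed = pywt.waverec(compressed, 'db4')[:len(speech)]

snr = 10 * np.log10(np.sum(speech ** 2) / np.sum((speech - reconstructed) ** 2))
print(f"compression ratio ~{len(speech) / kept:.1f}:1, SNR {snr:.1f} dB")
```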
Eye detection is used in many applications, such as pattern recognition, biometrics, and surveillance systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, an observed geometric shape is perceptually "meaningful" if its expected number of occurrences in an image with random distribution is very small. The Gestalt principle states that humans perceive things either by grouping similar elements or by recognizing patterns.
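A common way to make the Helmholtz principle computable is the a-contrario "number of false alarms" (NFA): a structure is accepted as meaningful when its expected number of occurrences under a random model is far below one. The sketch below illustrates that computation only; the contour size, agreement count, chance probability, and number of tests are hypothetical numbers, not values from the paper.

```python
from scipy.stats import binom

def nfa(n_tests, k_aligned, n_points, p_random):
    """Expected count of events at least this strong under the random model."""
    return n_tests * binom.sf(k_aligned - 1, n_points, p_random)

# e.g. a candidate eye contour of 50 pixels, 40 of which agree with the
# template orientation, each agreeing by chance with p = 1/8, tested over
# 10^4 candidate positions.
print(nfa(n_tests=1e4, k_aligned=40, n_points=50, p_random=1/8))
# NFA << 1  ->  very unlikely to arise by chance, hence perceptually meaningful
```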
Diabetes is an increasingly common chronic disease, affecting millions of people around the world. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine-learning-based prediction techniques for diabetes data analysis can help in the early detection and prediction of the disease and of its consequences, such as hypo- and hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, K-nearest neighbors (KNN), and random forest. We conducted two experiments: the first used all 12 features of the dataset, where the random forest outperformed the others with 98.8% accuracy; the second used only five attributes.
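The first experiment's three-classifier comparison could be reproduced along the lines of the sketch below; since the 1000-patient Iraqi records are not bundled here, a synthetic 12-feature stand-in dataset is used, and all hyperparameters are illustrative defaults rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 1000-record, 12-feature dataset.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    accuracy = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {accuracy:.3f}")
```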
The penalized least squares method is a popular approach for high-dimensional data, where the number of explanatory variables is larger than the sample size. Penalized least squares offers high prediction accuracy and performs estimation and variable selection at once, yielding a sparse model, that is, a model with few variables that can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method and a robust penalized estimator.
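The idea can be sketched as follows: replace the squared loss with a robust loss (the Huber loss is used here as one possible choice) and keep the L1 penalty, solved by proximal gradient descent with soft-thresholding. The step size, penalty level, Huber constant, and simulated data are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss with respect to the residuals r
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_lasso(X, y, lam=0.1, delta=1.0, step=None, iters=1000):
    n, p = X.shape
    step = step or n / np.linalg.norm(X, 2) ** 2   # safe step for the smooth part
    beta = np.zeros(p)
    for _ in range(iters):
        r = X @ beta - y
        grad = X.T @ huber_grad(r, delta) / n
        beta = soft_threshold(beta - step * grad, step * lam)  # L1 proximal step
    return beta

rng = np.random.default_rng(0)
n, p = 100, 200                        # p > n: high-dimensional setting
X = rng.standard_normal((n, p))
true_beta = np.zeros(p); true_beta[:5] = 3.0
y = X @ true_beta + 0.1 * rng.standard_normal(n)
y[:5] += 20                            # a few outlying observations
print(np.nonzero(np.abs(robust_lasso(X, y)) > 0.5)[0])  # sparse, mostly the first 5
```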
Steganography is defined as hiding confidential information in some other chosen medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover medium (text, image, audio, or video). Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans or machines; the purpose of the proposed hiding scheme is to make this change undetectable. The current research focuses on a method that prevents the detection of hidden information by humans and machines, based on a spiral search method, and the Structural Similarity Index (SSIM) measure is used to assess the accuracy and quality of the result.
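As a baseline for how such quality measurement works, the sketch below performs plain sequential LSB embedding on a synthetic cover image and reports the SSIM between cover and stego image; the spiral-search embedding order used in the paper is not reproduced, and the cover image and message are made up for the example.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

message = b"secret"
bits = [(byte >> i) & 1 for byte in message for i in range(8)]

stego = cover.copy()
flat = stego.reshape(-1)
flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)

# Recover the embedded bits and measure imperceptibility with SSIM.
recovered = flat[:len(bits)] & 1
quality = structural_similarity(cover, stego, data_range=255)
print(bytes(sum(bit << j for j, bit in enumerate(recovered[i:i + 8]))
            for i in range(0, len(bits), 8)), round(float(quality), 4))
```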