The industrial factory is one of the most challenging environments for future wireless communication systems, where the goal is to produce products at low cost in a short time. This high level of network performance is achieved by distributed massive MIMO, which provides indoor networks with joint beamforming that enhances both 5G network capacity and user experience. Given the importance of this topic, this study introduces a new optimization problem concerning the investigation of multi-beam antenna (MBA) coverage possibilities in 5G networks for indoor environments, named the Base-station Beams Distribution Problem (BBDP). This problem has an extensive number of parameters and constraints, including the users' locations, required data rates, and the number of antenna elements. Thus, the BBDP can be considered an NP-hard problem, whose complexity increases exponentially with its dimension. Therefore, it requires a special computing method that can handle it in a reasonable amount of time. In this study, several differential evolution (DE) variants are proposed to solve the BBDP. The results show that, among all DE variants, the self-adaptive DE (jDE) can find feasible solutions and outperforms the classical variants in all BBDP scenarios, with a coverage rate of 85% and a beam diameter of 500 m.
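As a concrete illustration of the self-adaptation that distinguishes jDE from classical DE, here is a minimal sketch on a stand-in objective (the sphere function), assuming the standard jDE settings (tau1 = tau2 = 0.1, F in [0.1, 1.0], CR in [0, 1]); the actual BBDP objective, constraints, and parameters are not reproduced here.

```python
import numpy as np

# Minimal jDE (self-adaptive differential evolution) sketch. The sphere
# function stands in for the BBDP objective, which the abstract does not
# specify; tau1/tau2 and the F/CR ranges follow the standard jDE settings.

def jde(fitness, dim, bounds, pop_size=30, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    F = np.full(pop_size, 0.5)           # per-individual mutation factor
    CR = np.full(pop_size, 0.9)          # per-individual crossover rate
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Self-adapt F and CR before producing the trial vector.
            Fi = 0.1 + 0.9 * rng.random() if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + Fi * (b - c), lo, hi)
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True        # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:                  # greedy selection keeps F/CR too
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    return pop[fit.argmin()], fit.min()

if __name__ == "__main__":
    best, value = jde(lambda x: np.sum(x**2), dim=10, bounds=(-5.0, 5.0))
    print("best objective value:", value)
```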
In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) technique. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the transformation level increases in the 3-D case, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-mean-square difference (PRD).
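As a rough illustration of the 3-D multi-level DWT stage, here is a minimal sketch assuming the PyWavelets library, with simple coefficient thresholding standing in for the paper's coding stage, which the abstract does not detail.

```python
import numpy as np
import pywt

# Sketch of 3-D two-level DWT compression: decompose, discard small
# coefficients, reconstruct. The thresholding step is a simple stand-in;
# the paper's actual quantisation/coding scheme is not given.
volume = np.random.rand(64, 64, 64)            # placeholder 3-D image data

coeffs = pywt.wavedecn(volume, wavelet="haar", level=2)   # Haar, 2 levels
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 90)     # keep the largest 10% of coefficients
arr[np.abs(arr) < threshold] = 0.0

kept = np.count_nonzero(arr)
print("approximate compression ratio:", arr.size / kept)

reconstructed = pywt.waverecn(
    pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), wavelet="haar")
print("max reconstruction error:", np.abs(volume - reconstructed).max())
```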
Hepatitis is one of the diseases that has become more widespread in recent years, in terms of the high number of infections. Hepatitis causes inflammation that destroys liver cells, and it occurs as a result of viruses, bacteria, blood transfusions, and other causes. There are five types of hepatitis viruses (A, B, C, D, and E), which vary in severity by type. Accurate and early diagnosis is the best way to prevent the disease, as it allows infected people to take preventive steps so that they do not transmit the infection to other people, and diagnosis using artificial intelligence gives an accurate and rapid diagnostic result. The analytical method for the data relied on a radial basis network to diagnose the disease.
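Since the abstract does not specify the network configuration, the following is only a generic radial basis function classifier sketch on synthetic stand-in data: Gaussian hidden units with fixed centers and linear output weights fitted by least squares.

```python
import numpy as np

# Generic radial basis function (RBF) network sketch. Synthetic data stand
# in for the hepatitis dataset, whose fields the abstract does not list;
# the number of hidden units and the width heuristic are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 200 patients, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # placeholder diagnosis label

centers = X[rng.choice(len(X), 20, replace=False)]  # 20 Gaussian hidden units
sigma = np.median(np.linalg.norm(X[:, None] - centers[None], axis=2))

def rbf_features(X):
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

H = rbf_features(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)           # linear output weights

pred = (rbf_features(X) @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```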
The main aim of image compression is to reduce an image's size so that it can be transmitted and stored efficiently; many methods have therefore been developed to compress images, one of which is the multilayer perceptron (MLP). The MLP is an artificial neural network based on the back-propagation algorithm for compressing the image. Since depending only on the number of neurons in the hidden layer is not enough to reach the desired results, the criteria on which the compression process depends must also be taken into consideration to get the best results. In this research, a group of TIFF images of size (256×256) was trained and compressed using the MLP for each…
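A minimal sketch of bottleneck-MLP compression trained with back-propagation, assuming 8×8 image blocks and a 4:1 hidden-layer ratio; the block size, hidden size, and learning rate are illustrative, not the paper's settings.

```python
import numpy as np

# Bottleneck MLP sketch for image-block compression: 64-pixel blocks are
# encoded into 16 hidden activations (4:1 ratio) and decoded back, trained
# with plain back-propagation on the reconstruction error.
rng = np.random.default_rng(0)
blocks = rng.random((500, 64))                  # stand-in for 8x8 image blocks

W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)   # encoder weights
W2 = rng.normal(0, 0.1, (16, 64)); b2 = np.zeros(64)   # decoder weights
lr = 0.05

for epoch in range(2000):
    h = np.tanh(blocks @ W1 + b1)               # compressed representation
    out = h @ W2 + b2                           # reconstruction
    err = out - blocks
    # Back-propagate the reconstruction error through both layers.
    dW2 = h.T @ err / len(blocks); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)              # tanh derivative
    dW1 = blocks.T @ dh / len(blocks); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction MSE:", (err**2).mean())
```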
This paper concerns deriving and estimating the reliability of a multicomponent system in the stress-strength model R(s,k), when the stress and strength are independent and identically distributed (iid), following the two-parameter Exponentiated Pareto distribution (EPD) with unknown shape and known scale parameters. A shrinkage estimation method, together with the maximum likelihood estimator (MLE), has been considered. Comparisons among the proposed estimators were made by simulation, based on the mean squared error (MSE) criterion.
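For reference, the quantity being estimated, assuming the usual s-out-of-k formulation of multicomponent stress-strength reliability with strength CDF $F_X$ and stress CDF $F_Y$, is

$$
R_{s,k} \;=\; P\!\left(\text{at least } s \text{ of the } k \text{ strengths exceed the stress}\right)
\;=\; \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{\infty} \left[1 - F_X(y)\right]^{i} \left[F_X(y)\right]^{k-i} \, dF_Y(y),
$$

where, under the EPD assumption, the CDF takes the common exponentiated Pareto form $F(x) = \left[1 - (1+x)^{-\lambda}\right]^{\theta}$ for $x > 0$ (notation for the shape and scale parameters varies across sources).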
Photon-based approaches are among the most common in parallel optical computing due to their recent development, ease of production, and low cost; as a result, most researchers have concentrated their efforts on them. The basic arithmetic unit (BAU) is built using a three-step approach that uses optical gates with three states to configure the circuitry for addition, subtraction, and multiplication. This is a new optical computing method based on a radix-2 binary signed-digit (BSD) number system that includes the digits -1, 0, and 1. Light with horizontal polarization (LHP) (↔), light with no intensity (LNI) (⥀), and light with vertical polarization (LVP) (↨) are used to represent these three digit values.
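To make the signed-digit idea concrete, here is a minimal carry-free BSD addition sketch in software; the transfer rules follow the standard two-step signed-digit scheme, and the mapping of digits to the three optical states is only indicated in comments, since the abstract does not fix it.

```python
# Carry-free binary signed-digit (BSD) addition sketch. Digits are -1, 0, 1
# (least significant first), mirroring the three optical states mentioned
# above; which polarization encodes which nonzero digit is the paper's
# choice and is not assumed here. The interim-digit/transfer rules are the
# standard two-step scheme, so no carry ripples more than one position.

def bsd_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    p = [a[i] + b[i] for i in range(n)]          # position sums in {-2..2}
    w = [0] * n                                  # interim digits
    t = [0] * (n + 1)                            # transfers, t[0] = 0
    for i in range(n):
        lower = p[i - 1] if i > 0 else 0         # sign decides transfer choice
        if p[i] == 2:    t[i + 1], w[i] = 1, 0
        elif p[i] == 1:  t[i + 1], w[i] = (0, 1) if lower <= 0 else (1, -1)
        elif p[i] == 0:  t[i + 1], w[i] = 0, 0
        elif p[i] == -1: t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        else:            t[i + 1], w[i] = -1, 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def to_int(digits):                              # value of an LSB-first BSD number
    return sum(d * 2**i for i, d in enumerate(digits))

x, y = [1, 0, -1, 1], [1, 1, 0, -1]              # values 5 and -5, LSB first
s = bsd_add(x, y)
assert to_int(s) == to_int(x) + to_int(y)
print(s, "=", to_int(s))
```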
Load balancing in computer networks is one of the subjects that has attracted the most research attention in the last decade. Load balancing reduces processing time and memory usage, the two main concerns of network companies nowadays and the two factors that determine whether an approach is worth applying. There are two kinds of load balancing: distributing jobs among servers before processing starts, with each job staying on its server until the end of the process, is called static load balancing, while moving jobs between servers during processing is called dynamic load balancing. In this research, two algorithms are designed and implemented: the History Usage (HU) algorithm, which statically balances the load of a loaded…
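The abstract does not describe the HU algorithm itself, so the following is only a generic static-assignment sketch of the scheme defined above: each job is placed, before any processing starts, on the currently least-loaded server and stays there.

```python
import heapq

# Generic static load-balancing sketch: jobs are assigned before processing
# starts and never migrate. This illustrates the static scheme the abstract
# defines; it is NOT the paper's History Usage (HU) algorithm, whose details
# the abstract does not give.

def static_assign(job_costs, n_servers):
    heap = [(0.0, s) for s in range(n_servers)]   # (accumulated load, server id)
    heapq.heapify(heap)
    assignment = []
    for cost in job_costs:
        load, server = heapq.heappop(heap)        # least-loaded server so far
        assignment.append(server)
        heapq.heappush(heap, (load + cost, server))
    return assignment

jobs = [5.0, 3.0, 8.0, 2.0, 7.0, 1.0]             # estimated job costs
print(static_assign(jobs, n_servers=3))           # server index per job
```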
In this paper, estimation of the system reliability of a multicomponent stress-strength model R(s,k) is considered, where the stress and strength are independent random variables following the Exponentiated Weibull distribution (EWD) with known first shape parameter θ and unknown second shape parameter α, using different estimation methods. Comparisons among the proposed estimators were made through Monte Carlo simulation, based on the mean squared error (MSE) criterion.
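A minimal Monte Carlo sketch of the MSE comparison workflow, assuming the standardized EWD form F(x) = [1 − exp(−x^θ)]^α with θ known; under this form the MLE of α has a simple closed form. The paper's full set of estimators and its R(s,k) evaluation are not reproduced here.

```python
import numpy as np

# Monte Carlo MSE sketch for estimating the unknown EWD shape alpha when
# theta is known, using F(x) = [1 - exp(-x**theta)]**alpha. For this form
# the MLE is alpha_hat = -n / sum(log(1 - exp(-x**theta))).
rng = np.random.default_rng(1)
theta, alpha_true = 2.0, 1.5        # known / unknown shape parameters
n, replications = 50, 5000

def sample_ewd(size):
    u = rng.random(size)            # inverse-CDF sampling
    return (-np.log1p(-u ** (1.0 / alpha_true))) ** (1.0 / theta)

estimates = np.empty(replications)
for r in range(replications):
    x = sample_ewd(n)
    estimates[r] = -n / np.log1p(-np.exp(-x ** theta)).sum()   # MLE of alpha

mse = np.mean((estimates - alpha_true) ** 2)
print("MLE of alpha: mean =", estimates.mean(), " MSE =", mse)
```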
Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed to analyze this type of data.
In this research, the focus was on grouping and analyzing such data, as cluster analysis plays an important role in identifying and grouping co-expressed profiles over time, and on employing them in the nonparametric smoothing cubic B-spline model, which provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also more flexible and can capture more complex patterns and fluctuations in the data.
The balanced longitudinal data profiles were compiled into subgroups…
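A minimal smoothing-spline sketch on one synthetic longitudinal profile, using SciPy's cubic UnivariateSpline as a stand-in for the paper's smoothing cubic B-spline; the continuous first and second derivatives mentioned above can be read directly off the fitted spline.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Cubic smoothing-spline sketch for one longitudinal profile. The data are
# synthetic; the smoothing factor s is illustrative, not the paper's choice.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                       # observation times
y = np.sin(t) + rng.normal(0, 0.2, t.size)       # noisy longitudinal profile

spline = UnivariateSpline(t, y, k=3, s=t.size * 0.04)   # k=3: cubic spline
grid = np.linspace(0, 10, 200)
fitted = spline(grid)
slope = spline.derivative(1)(grid)               # continuous first derivative
curvature = spline.derivative(2)(grid)           # continuous second derivative
print("residual sum of squares:", float(spline.get_residual()))
```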
In this research, kernel estimation methods (nonparametric density estimators) were relied upon in estimating the two-response (binary) logistic regression, comparing the Nadaraya-Watson method with the local scoring algorithm. The optimal smoothing parameter λ was estimated by cross-validation and generalized cross-validation; the optimal bandwidth λ has a clear effect on the estimation process and plays a key role in smoothing the curve so that it approaches the real curve. The goal of using the kernel estimator is to modify the observations so that estimators with characteristics close to those of the real parameters can be obtained. Based on medical data for patients with chronic…
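A minimal Nadaraya-Watson sketch for a binary response with the bandwidth λ chosen by leave-one-out cross-validation; the Gaussian kernel and synthetic data are stand-ins for the paper's medical dataset, and the local scoring comparison is not reproduced here.

```python
import numpy as np

# Nadaraya-Watson kernel regression for a binary response, with the
# bandwidth lambda selected by leave-one-out cross-validation (CV).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 150)
p_true = 1 / (1 + np.exp(-(x - 5)))              # true success probability
y = (rng.random(150) < p_true).astype(float)     # binary response

def nw_estimate(x0, x, y, lam):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / lam) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv_score(lam):
    # Leave-one-out CV: zero out each point's own kernel weight.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / lam) ** 2)
    np.fill_diagonal(w, 0.0)
    fitted = (w @ y) / w.sum(axis=1)
    return np.mean((y - fitted) ** 2)

grid = np.linspace(0.1, 2.0, 40)
lam_opt = grid[np.argmin([loo_cv_score(l) for l in grid])]
print("CV-optimal bandwidth:", lam_opt)
print("P(y=1 | x=5) estimate:", nw_estimate(np.array([5.0]), x, y, lam_opt)[0])
```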