In this work, a model of a source generating a truly random quadrature phase shift keying (QPSK) signal constellation, as required for a quantum key distribution (QKD) system based on the BB84 protocol with phase coding, is implemented using the OptiSystem 9 software package. The randomness of the generated sequence is achieved by building an optical setup based on a weak laser source, beam splitters, and single-photon avalanche photodiodes operating in Geiger mode. The random string obtained from the optical setup is used to generate the QPSK signal constellation required for phase coding in the BB84-based QKD system at a bit rate of 2 Gbit/s.
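As an illustrative sketch of the phase-coding step described above, random bit pairs can be mapped onto the four QPSK carrier phases. The particular bit-to-phase assignment below is an assumption for illustration, not necessarily the mapping used in the paper's OptiSystem model:

```python
import secrets

# One common Gray mapping of bit pairs to the four QPSK phases (degrees).
# This specific assignment is an assumption, not taken from the paper.
PHASES = {(0, 0): 45.0, (0, 1): 135.0, (1, 1): 225.0, (1, 0): 315.0}

def random_qpsk_phases(n_symbols):
    """Generate n_symbols QPSK phases from cryptographically random bits."""
    bits = [secrets.randbits(1) for _ in range(2 * n_symbols)]
    return [PHASES[(bits[2 * i], bits[2 * i + 1])] for i in range(n_symbols)]
```

In the optical setup, the random bits would come from the single-photon detector outputs rather than a software source; `secrets` merely stands in for that hardware randomness here.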
A comparison of double informative and non-informative priors assumed for the parameter of the Rayleigh distribution is considered. Three different sets of double priors are included for the single unknown parameter of the Rayleigh distribution. We assumed three double priors: the square root inverted gamma (SRIG) with the natural conjugate family of priors, the square root inverted gamma with the non-informative prior, and the natural conjugate family of priors with the non-informative prior. The data were generated from three cases of the Rayleigh distribution for different sample sizes (small, medium, and large), and the Bayes estimators for the parameter were derived under a squared error loss
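Under squared error loss the Bayes estimator is the posterior mean. As a minimal sketch (not the paper's parameterization), assume the conjugate inverse-gamma prior on lambda = sigma^2 for Rayleigh data; the function name and prior form are illustrative:

```python
def bayes_rayleigh_sq(data, a, b):
    """Posterior mean of lambda = sigma^2 for Rayleigh data x_i with
    likelihood proportional to lambda^(-n) * exp(-sum(x_i^2)/(2*lambda)),
    under a conjugate inverse-gamma(a, b) prior on lambda.
    Posterior: IG(a + n, b + sum(x_i^2)/2); its mean is scale/(shape - 1)."""
    n = len(data)
    s = sum(x * x for x in data)
    shape = a + n
    scale = b + s / 2.0
    return scale / (shape - 1)
```

The square-root inverted gamma prior on sigma used in the paper is equivalent, up to reparameterization, to an inverse gamma on sigma^2; this sketch works on the sigma^2 scale for brevity.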
In this work, a test room of dimensions (2 × 1.5 × 1.5) m³ was built in Baghdad city, while the solar chimneys (SC) were designed with an aspect ratio (ar) greater than 12. The test room was supplied with several solar collectors: a vertical single-side air-pass collector with ar equal to 25, and a 45°-tilted double-side air-pass collector with ar equal to 50 for each pass; both collectors consist of a flat thermal energy storage box collector (TESB) covered by a transparent clear acrylic sheet. The third type of collector is an array of evacuated tubular collectors with a thermosyphon at 45°, installed at the bottom of the TESB of the vertical SC. The TESB was
In this study, we focused on random coefficient estimation for the general regression and Swamy models of panel data. Using this type of data gives a better chance of obtaining improved estimation methods and indicators. Entropy methods were used to estimate the random coefficients for the general regression and Swamy models of panel data in two ways: the first is the maximum dual entropy and the second is the general maximum entropy; a comparison between them was carried out by simulation to choose the optimal method.
The results were compared using the mean squared error and the mean absolute percentage error for different cases in terms of the correlation values
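The two comparison criteria named above can be written directly; this is a plain-Python sketch with our own function names:

```python
def mse(actual, pred):
    """Mean squared error: average of squared residuals."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error, in percent (requires nonzero actuals)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```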
In this research, covariance estimates were used to estimate the population mean in stratified random sampling with combined regression estimates. Combined regression estimates employing robust variance-covariance matrix estimates were compared with combined regression estimates employing the traditional variance-covariance matrix estimates when estimating the regression parameter, through two criteria: relative efficiency (RE) and mean squared error (MSE). We found that the robust estimates significantly improved the quality of the combined regression estimates by reducing the effect of outliers, using the robust variance-covariance matrix estimates (MCD, MVE) when estimating the regression parameter. In addition, the results of the simulation study proved
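To illustrate the MCD idea in its simplest, one-dimensional form (the paper works with full variance-covariance matrices; this toy sketch is not its implementation), the exact 1-D MCD picks the h-subset of smallest variance, which in sorted data is always a contiguous block:

```python
def mcd_1d(x, h=None):
    """Exact 1-D Minimum Covariance Determinant estimate: among all h-subsets
    of the data (contiguous subsets of the sorted sample suffice in 1-D),
    pick the one with smallest variance; return its mean and variance as a
    robust location/scale pair."""
    x = sorted(x)
    n = len(x)
    h = h or (n // 2 + 1)
    best = None
    for i in range(n - h + 1):
        sub = x[i:i + h]
        m = sum(sub) / h
        v = sum((s - m) ** 2 for s in sub) / h
        if best is None or v < best[1]:
            best = (m, v)
    return best
```

Outliers like the value 100 in a sample of small numbers simply never enter the minimizing subset, which is why the resulting location/scale estimates resist them.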
This research deals with estimating the reliability function of the two-parameter exponential distribution using different estimation methods: Maximum Likelihood, Median-First Order Statistics, Ridge Regression, Modified Thompson-Type Shrinkage, and Single Stage Shrinkage. Comparisons among the estimators were made using Monte Carlo simulation based on the mean squared error (MSE) criterion, and they conclude that the shrinkage methods perform better than the other methods.
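For the maximum likelihood method, a minimal sketch assuming the standard two-parameter exponential with location g and scale theta (names are ours, not the paper's notation):

```python
import math

def mle_two_param_exp(data):
    """MLEs for the two-parameter exponential: the location is the sample
    minimum, the scale is the mean excess over that minimum."""
    g = min(data)
    theta = sum(data) / len(data) - g
    return g, theta

def reliability(t, g, theta):
    """Reliability R(t) = P(T > t) = exp(-(t - g)/theta) for t >= g, else 1."""
    return 1.0 if t < g else math.exp(-(t - g) / theta)
```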
This paper deals with defining the Burr-XII distribution and how to obtain its pdf and CDF, since this distribution is a failure distribution compounded from two failure models, the Gamma model and the Weibull model. Some equipment may have many important parts whose probability distributions may be of different types, so Burr, through its different compound formulas, was found to be the best model to study, and its parameters were estimated to compute the mean time to failure rate. Burr-XII rather than other models is considered here because it is used to model a wide variety of phenomena, including crop prices, household income, option market price distributions, risk, and travel time. It has two shape parameters.
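The standard Burr-XII pdf and CDF, with shape parameters c and k and the scale taken as 1 for brevity, can be coded directly:

```python
def burr12_cdf(x, c, k):
    """Burr-XII CDF: F(x) = 1 - (1 + x^c)^(-k) for x >= 0."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_pdf(x, c, k):
    """Burr-XII pdf: f(x) = c*k * x^(c-1) * (1 + x^c)^(-k-1) for x >= 0."""
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-k - 1)
```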
Biomedical signals such as the ECG are extremely important in the diagnosis of patients and are commonly recorded with noise. Many different kinds of noise exist in the biomedical environment, such as Power Line Interference Noise (PLIN). Adaptive filtering is selected to contend with these defects; adaptive filters can adjust the filter coefficients for a given filter order. The objectives of this paper are, first, an application of the Least Mean Square (LMS) algorithm and, second, an application of the Recursive Least Square (RLS) algorithm to remove the PLIN. The LMS and RLS adaptive filter algorithms were proposed to adapt the filter order and the filter coefficients simultaneously; the performance of the existing LMS
This paper is focused on studying the effect of cutting parameters (spindle speed, feed, and depth of cut) on the response (temperature and tool life) during the turning process. The inserts used in this study are carbide inserts coated with TiAlN (titanium aluminium nitride) for machining a shaft of 316L stainless steel. The finite difference method was used to find the temperature distribution. The experimental measurements were taken using an infrared camera, while the simulation was performed using the Matlab software package. The results showed that the maximum difference between the experimental and simulation results was equal to 19.3, so a good agreement between the experimental and simulation results was achieved. Tool life w
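The finite difference scheme can be illustrated with the simplest possible case, one explicit step of 1-D heat conduction (the paper's model of the turning process is certainly more elaborate; this only shows the stencil):

```python
def heat_1d_step(T, alpha, dx, dt):
    """One explicit finite-difference step of the 1-D heat equation
    dT/dt = alpha * d2T/dx2, updating interior nodes with the central
    second-difference stencil and holding the boundary temperatures fixed.
    Stable when r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    interior = [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                for i in range(1, len(T) - 1)]
    return [T[0]] + interior + [T[-1]]
```

Repeated application of this step marches the temperature field forward in time, which is how a transient temperature distribution is obtained from initial and boundary conditions.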
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods. The proposed method's best accuracy is 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
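The 80/20 split with 5-fold cross-validation described above can be sketched with index bookkeeping alone (stdlib only; function and variable names are ours):

```python
import random

def split_and_folds(n, test_frac=0.2, k=5, seed=0):
    """Shuffle sample indices, hold out test_frac of them for final testing,
    and partition the remaining training indices into k disjoint
    cross-validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(n * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    folds = [train[i::k] for i in range(k)]   # round-robin partition into k folds
    return train, test, folds
```

Each cross-validation round then trains the classifier on four folds and validates on the fifth, while the held-out 20% is touched only once for the final accuracy figure.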