A major goal of next-generation wireless communication systems is a reliable, high-speed wireless link that supports high user mobility; such systems must focus on increasing link throughput and network capacity. In this paper, a novel, spectrally efficient system is proposed for generating and transmitting two-dimensional (2-D) orthogonal frequency division multiplexing (OFDM) symbols through a 2-D inter-symbol interference (ISI) channel. Instead of conventional data-mapping techniques, the discrete finite Radon transform (FRAT) is used as the data-mapping technique because of the increased orthogonality it offers. As a result, the proposed structure gives a significant improvement in bit error rate (BER) performance. The new structure was tested, and the performance of serial one-dimensional (1-D) Radon-based OFDM and parallel 2-D Radon-based OFDM was compared under additive white Gaussian noise (AWGN), flat-fading, and multi-path selective fading channel conditions. It is found that parallel 2-D Radon-based OFDM has better speed and performance than serial 1-D Radon-based OFDM.
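The FRAT mapping mentioned above can be illustrated with a minimal sketch. This is a textbook finite Radon transform over a prime-sized block, not necessarily the authors' exact mapper; the block size p and the line parameterization are assumptions:

```python
import numpy as np

def frat(f):
    """Finite Radon transform of a p x p block (p prime).

    Projection r[l, k] sums the pixels of f along the discrete line
    j = (k + l*i) mod p; the extra row r[p] holds the column sums
    (the "vertical line" projection).
    """
    p = f.shape[0]
    r = np.zeros((p + 1, p))
    for l in range(p):           # slope index
        for k in range(p):       # intercept index
            r[l, k] = sum(f[i, (k + l * i) % p] for i in range(p))
    r[p, :] = f.sum(axis=0)      # vertical projections
    return r
```

For each slope, the p lines partition the block, so every projection row sums to the total energy of the block; this redundancy is what the increased orthogonality argument builds on.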
Microcontrollers have recently come into wide use for monitoring and data acquisition. This development has given rise to various architectures for deploying and interfacing microcontrollers in a network environment. Some existing architectures suffer from resource redundancy, extra processing, high cost, and delayed response. This paper presents a flexible, concise architecture for building a distributed, networked microcontroller system. The system consists of a single server, working through the internet, and a set of microcontrollers distributed across different sites, each connected to the internet through Ethernet. In this system, a client requesting data from a certain site is served through just the one server, that is in
Plagiarism is becoming more of a problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet, and the ease with which they can be copied and pasted. It is academic theft, since the perpetrator has "taken" and presented the work of others as his or her own. Manual detection of plagiarism by a human being is difficult, imprecise, and time-consuming, because it is hard for anyone to compare a given text against all existing material. Plagiarism is a big problem in higher education, and it can happen in any subject. Plagiarism detection has been studied in many scientific articles, and recognition methods have been developed utilizing plagiarism analysis, authorship identification, and
Speech is the essential means of interaction between humans, or between human and machine. However, it is always contaminated with different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have emerged as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian speech and noise modeling based on the distribution of orthogonal transform (discrete Krawtchouk-Tchebichef transform) coefficients. The Discrete Kra
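As a generic illustration of gain-based enhancement in the minimum-mean-square-error spirit (not the paper's Krawtchouk-Tchebichef-domain estimator), a per-coefficient Wiener gain can be sketched; the gain floor is an arbitrary choice to limit musical noise:

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=0.05):
    """Per-coefficient Wiener gain snr / (1 + snr).

    snr is the a-priori SNR estimated by power subtraction; the result
    is floored to avoid over-suppression. Illustrative only.
    """
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)
```

The enhanced coefficients are the noisy transform coefficients multiplied by this gain before the inverse transform.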
The gas-lift technique plays an important role in sustaining oil production, especially in a mature field whose reservoirs' natural energy has become insufficient. However, optimal allocation of the gas injection rate across a large field's gas-lift network so as to maximize the oil production rate is a challenging task. Conventional gas-lift optimization methods may become inefficient and incapable of modelling gas-lift optimization in a large network system with multiple objectives, multiple constraints, and a limited gas injection rate. The key objective of this study is to assess the feasibility of utilizing the Genetic Algorithm (GA) technique to optimize t
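The GA-based allocation the study assesses can be sketched in a minimal form. The well-response curves, their coefficients, the available gas rate, and the GA settings below are all hypothetical placeholders; only the overall scheme (penalized fitness, selection, arithmetic crossover, Gaussian mutation under a shared gas-rate constraint) reflects the abstract:

```python
import random

# Hypothetical saturating well-response curves: oil rate vs. lift-gas rate.
def oil_rate(q, a, b):
    return a * q / (b + q)

WELLS = [(120.0, 30.0), (90.0, 20.0), (150.0, 45.0)]  # (a, b) per well, illustrative
Q_TOTAL = 60.0                                        # available lift-gas rate

def fitness(alloc):
    if sum(alloc) > Q_TOTAL:                # penalize infeasible allocations
        return -sum(alloc)
    return sum(oil_rate(q, a, b) for q, (a, b) in zip(alloc, WELLS))

def ga(pop_size=40, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, Q_TOTAL / len(WELLS)) for _ in WELLS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]     # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]    # arithmetic crossover
            i = rng.randrange(len(child))
            child[i] = max(0.0, child[i] + rng.gauss(0, 2))  # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

With the placeholder curves above, the GA pushes the total allocation toward the gas-rate limit while redistributing gas toward the more responsive wells.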
This paper deals with estimation of the reliability function and one shape parameter (θ) of the two-parameter Burr-XII distribution, when the other shape parameter (c) is known (c = 0.5, 1, 1.5), with an initial value of θ = 1, while different sample sizes (n = 10, 20, 30, 50) are used. The results depend on an empirical study in which simulation experiments are applied to compare four methods of estimation, as well as to compute the reliability function. The mean square error results indicate that the jackknife estimator is better than the other three estimators, for all sample sizes and parameter values.
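The simulation-based comparison can be sketched for one estimator. The closed-form MLE below uses the standard Burr-XII parameterization F(x) = 1 − (1 + x^c)^(−k) with c known; it is an illustrative stand-in, not the four estimators the paper actually compares:

```python
import random, math

def reliability(t, c, k):
    # Burr-XII reliability function R(t) = (1 + t**c)**(-k)
    return (1.0 + t ** c) ** (-k)

def rand_burr12(c, k, rng):
    # Inverse-transform sampling from F(x) = 1 - (1 + x**c)**(-k)
    u = rng.random()
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def mle_k(sample, c):
    # With c known, the MLE of k has the closed form n / sum(log(1 + x**c))
    return len(sample) / sum(math.log(1.0 + x ** c) for x in sample)

def mse_of_k_hat(c=1.0, k=1.0, n=30, reps=2000, seed=7):
    """Monte Carlo mean square error of the MLE of k, as in an
    empirical comparison study."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        sample = [rand_burr12(c, k, rng) for _ in range(n)]
        errs.append((mle_k(sample, c) - k) ** 2)
    return sum(errs) / reps
```

Repeating this for each estimator and each (c, n) combination yields the MSE table used to rank the methods.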
The aim of this book is to present a method for solving high-order ordinary differential equations with two-point boundary conditions of different kinds. We propose a semi-analytic technique using two-point osculatory interpolation to construct a polynomial solution. The original problem is recast using two-point osculatory interpolation, which fits equal numbers of derivatives at the end points of the interval [0, 1]. We also discuss the existence and uniqueness of solutions, and many examples are presented to demonstrate the applicability, accuracy, and efficiency of the method by comparison with conventional methods, i.e., VIDM, septic B-spline, NIM, HPM, and Haar wavelets, on the one hand, and to confirm the order of convergence on the other.
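Two-point osculatory interpolation on [0, 1] can be illustrated in its simplest (cubic Hermite) form, matching the value and first derivative at each end point; fitting equal numbers of higher derivatives at both ends follows the same pattern with higher-degree polynomials:

```python
def hermite01(f0, f1, d0, d1):
    """Cubic matching value and first derivative at x = 0 and x = 1.

    Uses the standard Hermite basis on [0, 1]:
    p(x) = f0*h00 + d0*h10 + f1*h01 + d1*h11.
    """
    def p(x):
        h00 = 2 * x**3 - 3 * x**2 + 1
        h10 = x**3 - 2 * x**2 + x
        h01 = -2 * x**3 + 3 * x**2
        h11 = x**3 - x**2
        return f0 * h00 + d0 * h10 + f1 * h01 + d1 * h11
    return p
```

Because the cubic reproduces any polynomial of degree at most three exactly, supplying the end-point data of f(x) = x² returns x² itself.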
The present work treats real oily saline wastewater released from oil-drilling sites by means of the electrocoagulation technique. Aluminum tubes were utilized as electrodes in a concentric arrangement to reduce the concentrations of 113400 mg TDS/L, 65623 mg TSS/L, and the ions of 477 mg HCO3/L, 102000 mg Cl/L, and 5600 mg Ca/L present in the real oily wastewater, under the effect of the operational parameters (applied current and reaction time), making use of the central composite rotatable design. The final concentrations of TDS, TSS, HCO3, Cl, and Ca obtained were 93555 ppm (17.50%), 11011 ppm (83.22%), 189 ppm (60.38%), 80000 ppm (22%), and 4200 ppm (25%), respectively, under the optimum values of the operational parameters
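The percentage removals quoted above follow directly from the initial and final concentrations:

```python
def removal_efficiency(c0, cf):
    """Percent removal from initial concentration c0 to final cf:
    (c0 - cf) / c0 * 100."""
    return (c0 - cf) / c0 * 100.0
```

For example, removal_efficiency(113400, 93555) gives roughly 17.5, matching the TDS figure reported in the abstract.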
General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep-learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle with accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model incorporating color-space fusion and preprocessing. Results: Experiments using the AdobeComposition-1k
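The alpha-matting problem this study addresses is governed by the compositing equation I = αF + (1 − α)B. A minimal sketch of that equation and its closed-form inversion when F and B are known follows (a toy grayscale setting for illustration, not the proposed U-Net model):

```python
import numpy as np

def composite(alpha, fg, bg):
    # Compositing equation: I = alpha*F + (1 - alpha)*B
    return alpha * fg + (1.0 - alpha) * bg

def solve_alpha(img, fg, bg, eps=1e-8):
    # Closed-form alpha when F and B are known (grayscale toy case);
    # real matting must estimate alpha with F and B unknown.
    return np.clip((img - bg) / (fg - bg + eps), 0.0, 1.0)
```

Matting is hard precisely because F and B are unknown in practice; the learned model predicts α directly from the image.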