Speech is an essential mode of interaction between humans and between humans and machines. However, it is often contaminated by various types of environmental noise. Speech enhancement algorithms (SEAs) have therefore become a significant approach in the speech processing field for suppressing background noise and recovering the original speech signal. In this paper, a new, efficient two-stage SEA with low distortion is proposed in the minimum mean square error (MMSE) sense. The clean signal is estimated by exploiting Laplacian modeling of the speech and noise coefficient distributions in an orthogonal transform domain, the discrete Krawtchouk-Tchebichef transform (DKTT). The DKTT has high energy compaction, and its coefficient distribution closely matches a Laplacian density, which helps reduce residual noise without sacrificing speech components. Moreover, a cascaded hybrid speech estimator is proposed that applies two filter stages (non-linear and linear) in the DKTT domain to lessen residual noise effectively without distorting the speech signal. The linear estimator acts as a post-processing filter that reinforces noise suppression by regenerating speech components. The results are compared with existing work in terms of several quality and intelligibility measures, and the comparative evaluation confirms the superior performance of the proposed SEA in various noisy environments. The improvement ratio of the presented algorithm in terms of the PESQ measure is 5.8% and 1.8% for white and babble noise environments, respectively, and in terms of the OVL measure it is 15.7% and 9.8% for white and babble noise environments, respectively.
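As a rough illustration of this kind of two-stage transform-domain estimator, the sketch below applies a non-linear shrinkage under a Laplacian speech prior followed by a linear Wiener-style post-filter. It is a minimal sketch under stated assumptions: the DCT stands in for the DKTT (which has no standard library routine), and the MAP soft-threshold gain, frame length, and noise/speech parameters are generic choices, not the authors' exact estimator.

```python
# Two-stage transform-domain denoiser sketch (illustrative assumptions only;
# DCT substitutes for the paper's DKTT, and the gains are generic).
import numpy as np
from scipy.fft import dct, idct

def two_stage_enhance(noisy, frame_len=256, noise_var=1e-3, b_speech=0.05):
    """Frame-wise enhancement: stage 1 non-linear Laplacian-MAP shrinkage,
    stage 2 linear Wiener-style post-filter."""
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        c = dct(noisy[start:start + frame_len], norm='ortho')  # analysis
        # Stage 1 (non-linear): MAP estimate under a Laplacian prior
        # exp(-|x|/b) with Gaussian noise -> soft threshold sigma^2 / b.
        thr = noise_var / b_speech
        c1 = np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
        # Stage 2 (linear): Wiener-like gain from the stage-1 power estimate,
        # regenerating components the shrinkage over-attenuated.
        gain = c1**2 / (c1**2 + noise_var)
        out[start:start + frame_len] = idct(gain * c, norm='ortho')
    return out
```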
Steganography is defined as hiding confidential information in some other chosen medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover medium (text, image, audio, or video). Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans or machines; the purpose of the suggested hiding scheme is to make this change undetectable. The current research focuses on a spiral-search-based method to prevent the detection of hidden information by humans and machines, and the Structural Similarity Index (SSIM) metric is used to assess the accuracy and quality of the result.
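To make the idea concrete, here is a minimal sketch of LSB embedding along a spiral pixel order, checked with SSIM. The spiral definition (outward from the image centre) is an assumption, since the abstract does not define its spiral search; `embed_spiral_lsb` and the payload are hypothetical names for illustration.

```python
# Spiral-order LSB embedding sketch; spiral definition is an assumption.
import numpy as np
from skimage.metrics import structural_similarity

def spiral_indices(h, w):
    """Yield (row, col) pairs spiralling outward from the image centre."""
    r, c = h // 2, w // 2
    yield r, c
    step, dr, dc = 1, 0, 1            # start moving right
    emitted = 1
    while emitted < h * w:
        for _ in range(2):            # two legs per ring, then step grows
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < h and 0 <= c < w:
                    yield r, c
                    emitted += 1
            dr, dc = dc, -dr          # rotate direction 90 degrees
        step += 1

def embed_spiral_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the LSBs of pixels visited in spiral order.
    Assumes the payload fits in the cover image."""
    stego = cover.copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    coords = spiral_indices(*cover.shape)
    for bit in bits:
        r, c = next(coords)
        stego[r, c] = (stego[r, c] & 0xFE) | bit
    return stego

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_spiral_lsb(cover, b"secret")
print(structural_similarity(cover, stego, data_range=255))  # ~1.0 expected
```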
Computer-aided diagnosis (CAD) has proved over the years to be an effective and accurate method for diagnostic prediction. This article focuses on the development of an automated CAD system intended to perform diagnosis as accurately as possible. Deep learning methods have produced impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or an auto-encoder are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm, which searches for the best feature subset.
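The sketch below shows one simplified way such an ACO-style feature selection could sit on top of CNN-extracted features. It assumes `X` already holds features from a pre-trained CNN or auto-encoder; the pheromone update rule, the SVM evaluator, and all parameters are illustrative placeholders rather than the paper's exact algorithm.

```python
# Simplified ACO-flavoured feature selection over pre-extracted CNN features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def aco_select(X, y, n_ants=10, n_iter=20, subset_size=20, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pheromone = np.ones(n_feat)              # initial pheromone per feature
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ant in range(n_ants):
            p = pheromone / pheromone.sum()  # sampling probabilities
            subset = rng.choice(n_feat, size=subset_size, replace=False, p=p)
            score = cross_val_score(SVC(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score       # reinforce features in good subsets
        pheromone *= (1.0 - rho)             # evaporation
    return best_subset, best_score
```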
Attacks on data transferred over a network occur millions of times a day. To address this problem, a scheme is proposed for securing data transferred over a network. The proposed scheme uses two techniques to guarantee a secure transfer of a message: the message is first encrypted and then hidden in a video cover. The encryption technique is the RC4 stream cipher, used to increase the message's confidentiality, while the least significant bit (LSB) embedding algorithm is improved by adding an additional layer of security. The improvement to the LSB method comes from replacing the usual sequential selection with a random selection of the frames and of the pixels within them.
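A minimal sketch of the two techniques follows. The RC4 key-scheduling and generation steps are the standard published algorithm; the seeded random permutation used for frame/pixel selection is an assumption about how the "random selection manner" might be realized, not the paper's exact method.

```python
# RC4 encryption followed by randomized LSB embedding (selection order is
# an illustrative assumption: a seeded permutation of all pixel positions).
import numpy as np

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def embed_random_lsb(frames: np.ndarray, payload: bytes, seed: int) -> np.ndarray:
    """Hide payload bits in the LSBs of randomly ordered pixel positions."""
    flat = frames.astype(np.uint8).ravel().copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    order = np.random.default_rng(seed).permutation(flat.size)[:bits.size]
    flat[order] = (flat[order] & 0xFE) | bits   # overwrite least significant bit
    return flat.reshape(frames.shape)
```

RC4 appears here because the abstract names it, though it is considered weak by modern cryptographic standards; the seed plays the role of a shared stego key that the receiver needs in order to revisit the same positions.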
In the present study, benzofuran-based chalcones 1(a, b) are synthesized by condensing aromatic aldehydes with 2-acetylbenzofuran in the presence of a suitable base. These chalcones are very useful precursors for the synthesis of pyrazoline, isoxazoline, pyrimidine, cyclohexenone, and indazole derivatives. All these compounds are characterized by their melting points and by FTIR and ¹H NMR (for some of them) spectral data.
This paper presents the design of an algorithm to represent the design stages of a fixturing system, which serves to increase the flexibility and automation of fixturing-system planning for uniform polyhedral parts. The system requires a manufacturing feature-recognition algorithm to describe the inputs (the configuration of the workpiece) and a database system representing the production plan and the existing fixturing system. A knowledge-based system was also developed to find the best fixturing analysis for a workpiece: the workpiece setup, the constraints on the workpiece, and the arrangement of the contacts on it.
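One hypothetical shape such a knowledge base could take is a rule table mapping recognized feature patterns to a fixturing analysis; everything in this sketch (rule contents, names, plan fields) is an illustrative placeholder, not the paper's actual knowledge base.

```python
# Hypothetical knowledge-base lookup: recognized features -> fixturing plan.
from dataclasses import dataclass

@dataclass
class FixturingPlan:
    setup: str            # chosen workpiece orientation
    constraints: list     # surfaces to be constrained
    contacts: list        # locator/clamp contact points

RULES = {
    # feature pattern -> fixturing plan (placeholder rules)
    ("slot", "planar_base"): FixturingPlan("base_down", ["base", "side_A"],
                                           ["L1", "L2", "L3", "C1"]),
    ("hole", "planar_base"): FixturingPlan("base_down", ["base"],
                                           ["L1", "L2", "L3", "C1", "C2"]),
}

def plan_fixturing(features: tuple):
    """Return the stored plan for a recognized feature pattern, if any."""
    return RULES.get(features)

print(plan_fixturing(("slot", "planar_base")))
```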
This paper examines the spatial response uniformity (SRU) of two types of heterojunction (CdS/Si and PbS/Si) laser detectors. The spatial response nonuniformity of these heterojunctions is not significant and is negligible in comparison with that of a p⁺-n silicon photodiode. Experimental results show that the uniformity of the CdS/Si heterojunction is better than that of the PbS/Si heterojunction.
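For readers unfamiliar with how such nonuniformity is quantified, the sketch below computes two common figures from a raster scan of photoresponse across the detector surface. The paper's exact metric is not given here; the coefficient of variation and peak-to-peak spread are assumptions about reasonable definitions.

```python
# Illustrative (non)uniformity figures from a 2-D photoresponse scan.
import numpy as np

def nonuniformity(response_map: np.ndarray) -> dict:
    r = np.asarray(response_map, dtype=float)
    return {
        "cv_percent": 100.0 * r.std() / r.mean(),             # relative spread
        "pp_percent": 100.0 * (r.max() - r.min()) / r.mean()  # peak-to-peak
    }

# Example: a nearly uniform synthetic scan with mild random variation
scan = np.ones((32, 32)) - 0.02 * np.random.default_rng(1).random((32, 32))
print(nonuniformity(scan))
```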
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for this task. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset; the first model uses FCM.
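A minimal sketch comparing the two algorithms named above follows: scikit-learn's k-means and a small hand-rolled FCM loop. The random feature matrix is a stand-in for the brain tumor data, and the parameter choices are generic assumptions.

```python
# k-means (hard assignments) vs. a basic fuzzy c-means (soft memberships).
import numpy as np
from sklearn.cluster import KMeans

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Basic FCM: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(0).random((200, 4))   # stand-in feature matrix
hard_labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
centers, memberships = fcm(X, c=3)
soft_labels = memberships.argmax(axis=1)        # defuzzified cluster labels
```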