Deep learning convolutional neural networks (CNNs) have been widely used to recognize and classify voice. Various techniques have been combined with CNNs to prepare voice data before the training process when developing classification models. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such task, and accurate pronunciation is required when learning to read the Qur’an. Thus, processing the pronunciation recordings and training on the processed data require a specific approach. To address this issue, a method based on a padding technique and a deep learning CNN is proposed to evaluate the pronunciation of the Arabic alphabet. Voice data recorded from six school children are used to test the performance of the proposed method. The padding technique augments the voice data before the data are fed to the CNN structure to develop the classification model. In addition, three other feature extraction techniques are introduced to enable comparison with the proposed padding-based method. The performance of the proposed method with the padding technique is on par with the spectrogram and better than the mel-spectrogram and mel-frequency cepstral coefficients (MFCC). Results also show that the proposed method is able to distinguish Arabic alphabet letters that are difficult to pronounce. The proposed method with the padding technique may be extended to assess pronunciation ability beyond the Arabic alphabet.
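As a hedged illustration of the padding step described above (not the authors' implementation), the sketch below zero-pads variable-length waveforms to a common length and feeds them to a small 1-D CNN; the clip length, layer sizes, and class count are assumptions chosen for the example.

```python
import numpy as np
import tensorflow as tf

TARGET_LEN = 16000   # assumed 1 s at 16 kHz; the abstract does not specify a clip length
NUM_CLASSES = 28     # the 28 Arabic alphabet letters

def pad_waveform(x, target_len=TARGET_LEN):
    """Zero-pad (or truncate) a 1-D waveform to a fixed length."""
    x = np.asarray(x, dtype=np.float32)[:target_len]
    return np.pad(x, (0, target_len - len(x)))

# Hypothetical CNN operating directly on the padded waveforms.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TARGET_LEN, 1)),
    tf.keras.layers.Conv1D(16, 9, strides=4, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 9, strides=4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Usage: X = np.stack([pad_waveform(w) for w in waveforms])[..., None]
#        model.fit(X, labels, epochs=20, validation_split=0.2)
```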
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods using two datasets. Each dataset is divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method outperforms the other methods in accuracy, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively.
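A minimal sketch of such a moments-plus-SVM pipeline is shown below; the Legendre basis stands in for the unspecified orthogonal polynomial family, and a generic univariate feature selector stands in for the paper's sparse filter, so both are assumptions rather than the authors' design.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def op_moments(epoch, degree=40):
    """Project one EEG epoch onto a Legendre polynomial basis and return the coefficients."""
    t = np.linspace(-1.0, 1.0, len(epoch))
    basis = legendre.legvander(t, degree)      # shape: (n_samples, degree + 1)
    return basis.T @ epoch / len(epoch)        # one moment per polynomial order

# Placeholder data: 100 two-class EEG epochs of 512 samples each.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 512))
y = rng.integers(0, 2, size=100)

X = np.array([op_moments(e) for e in X_raw])

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),   # stand-in for the sparse filter
                    SVC(kernel="rbf"))

# 80/20 hold-out split with 5-fold cross-validation on the training part.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("hold-out accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```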
This paper proposes a hybrid of two powerful methods, namely the differential transform and finite difference methods, to obtain the solution of the coupled Whitham-Broer-Kaup-like equations that arise in shallow-water wave theory. The capability of the method for such problems is verified using different parameters and initial conditions. The numerical simulations are depicted in 2D and 3D graphs. The approach is shown to return accurate solutions for this type of problem in comparison with the analytic ones.
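For reference, one commonly studied form of the coupled Whitham-Broer-Kaup system and the one-dimensional differential transform with respect to t are given below; the abstract does not state which "WBK-like" variant is solved, so this particular form is an assumption.

```latex
% Classical Whitham-Broer-Kaup system (assumed variant; \alpha, \beta are constant coefficients)
u_t + u\,u_x + v_x + \beta\,u_{xx} = 0, \qquad
v_t + (u v)_x + \alpha\,u_{xxx} - \beta\,v_{xx} = 0 .

% Differential transform of u(x,t) in t and its inverse (series) form
U_k(x) = \frac{1}{k!}\left[\frac{\partial^k u(x,t)}{\partial t^k}\right]_{t=0}, \qquad
u(x,t) = \sum_{k=0}^{\infty} U_k(x)\, t^k .
```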
Monaural source separation is a challenging problem because only a single channel is available, yet there is an unlimited range of possible solutions. In this paper, a monaural source separation model based on a hybrid deep learning model, consisting of a convolutional neural network (CNN), a dense neural network (DNN), and a recurrent neural network (RNN), is presented. A trial-and-error method is used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, the optimization algorithm, and the number of epochs on the separation performance are explored. The model is evaluated on the MIR-1K dataset for singing voice separation.
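A minimal sketch of a mask-based hybrid separator in this spirit is given below; the spectrogram shape, layer sizes, and the sigmoid soft mask are assumptions for illustration, not the configuration reported in the paper.

```python
import tensorflow as tf

FRAMES, FREQ_BINS = 128, 513   # assumed spectrogram shape (time frames x frequency bins)

# Input: magnitude spectrogram of the mixture.
inputs = tf.keras.layers.Input(shape=(FRAMES, FREQ_BINS))

# CNN stage: local feature extraction along the time axis.
x = tf.keras.layers.Conv1D(256, 5, padding="same", activation="relu")(inputs)

# RNN stage: temporal context across frames.
x = tf.keras.layers.GRU(256, return_sequences=True)(x)

# DNN stage: per-frame soft mask for the target (singing voice) source.
mask = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(FREQ_BINS, activation="sigmoid"))(x)

# The estimated voice spectrogram is the mask applied to the mixture.
voice = tf.keras.layers.Multiply()([inputs, mask])

model = tf.keras.Model(inputs, voice)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
model.summary()
# Training would fit mixture spectrograms against clean-voice spectrograms
# (e.g. from MIR-1K), tuning the learning rate, optimizer, and epochs as in the paper.
```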
Zigbee is one of the wireless sensor network (WSN) technologies designed for short-range communication applications. It follows the IEEE 802.15.4 specifications, which aim at networks with the lowest possible cost and power consumption in addition to a minimal data rate. In this paper, a Zigbee transmitter is designed based on the PHY-layer specifications of this standard. The modulation technique applied in this design is offset quadrature phase shift keying (OQPSK) with half-sine pulse shaping, which achieves the minimum possible number of phase transitions. In addition, the applied spreading technique is the direct sequence spread spectrum (DSSS) technique.
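The sketch below illustrates DSSS spreading followed by half-sine pulse-shaped O-QPSK as described above; the 8-chip PN code and the oversampling factor are placeholders, since the actual IEEE 802.15.4 PHY maps 4-bit symbols to specific 32-chip sequences that are not reproduced here.

```python
import numpy as np

OSR = 8                                  # samples per chip (placeholder value)
PN = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # placeholder PN code, not the 802.15.4 chip table

def spread(bits):
    """DSSS spreading: XOR every data bit with the PN chip sequence."""
    return np.concatenate([PN ^ int(b) for b in bits])

def oqpsk_half_sine(chips):
    """O-QPSK: even chips -> I, odd chips -> Q delayed by one chip, each half-sine shaped."""
    symbols = 2.0 * chips - 1.0                              # map {0, 1} -> {-1, +1}
    pulse = np.sin(np.pi * np.arange(2 * OSR) / (2 * OSR))   # half-sine spanning two chip periods

    def branch(sym):
        # each branch chip occupies its own 2*Tc slot shaped by the half-sine pulse
        return np.concatenate([s * pulse for s in sym])

    i_wave = np.concatenate([branch(symbols[0::2]), np.zeros(OSR)])
    q_wave = np.concatenate([np.zeros(OSR), branch(symbols[1::2])])  # one-chip offset
    return i_wave + 1j * q_wave                              # complex baseband signal

baseband = oqpsk_half_sine(spread([1, 0, 1, 1]))
print(len(baseband))   # (number of chips + 1) * OSR samples
```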
In this research, a low-cost, portable, disposable, environmentally friendly, and easy-to-use lab-on-paper platform sensor was made. The sensor was constructed using a mixture of Rhodamine-6G (Rh-6G), gold nanoparticles, and sodium chloride salt. The drop-casting method was used to deposit the mixture on the platform, which is a commercial office paper. The substrate was characterized using field emission scanning electron microscopy, Fourier-transform infrared spectroscopy, UV-visible spectrophotometry, and Raman spectrometry. The Rh-6G Raman signal was enhanced using the surface-enhanced Raman spectroscopy (SERS) technique with gold nanoparticles. The enhancement factor of the plasmonic commercial office paper reaches up to 0.9 × 10⁵ because of the localized surface plasmon resonance effect.
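For context, the analytical SERS enhancement factor is commonly estimated with the expression below; the abstract does not state which definition was applied, so this is an assumption.

```latex
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{ref}} / N_{\mathrm{ref}}}
```

Here I_SERS and I_ref are the Raman intensities of the analyte on the plasmonic paper and on a reference (non-plasmonic) substrate, and N_SERS and N_ref are the corresponding numbers of probed molecules.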
Lung cancer is one of the most common and dangerous diseases and, if treated late, can lead to death. It is more likely to be treated successfully if discovered at an early stage, before it worsens. Determining the size, shape, and location of lymphatic nodes can identify the spread of the disease around these nodes. Thus, identifying lung cancer at an early stage is remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert doctors; however, limited experience may lead to misdiagnosis and cause medical issues for patients. In the line of computer-assisted systems, many methods and strategies can be used to predict the cancer malignancy level, which plays a significant role in providing precise abnormality detection.
The research aims to demonstrate the impact of time-driven activity-based costing (TDABC), as a strategic technique compatible with the rapid developments and changes in the contemporary business environment, on pricing decisions. TDABC provides a new philosophy for allocating indirect costs by using time equations to drive resources and activities to the cost object and by identifying unused capacity and its associated costs. This provides the management of economic units with financial and non-financial information that helps them in the complex and risky process of making pricing decisions, and supports better pricing decisions in the endeavor to retain customers in a highly competitive environment with a variety of alternatives.
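As a hedged illustration of the TDABC mechanics mentioned above (all figures and the time equation are hypothetical, not taken from the study), the core calculation multiplies a capacity cost rate by the time each cost object consumes and reports the remainder as unused capacity.

```python
# Hypothetical TDABC example; every number below is illustrative only.
cost_of_capacity_supplied = 120_000.0   # e.g. quarterly cost of an order-handling department
practical_capacity_minutes = 60_000.0   # minutes of work the department can realistically supply

capacity_cost_rate = cost_of_capacity_supplied / practical_capacity_minutes  # cost per minute

def order_handling_minutes(lines, special):
    """Hypothetical time equation: base time + per-line time + extra time for special orders."""
    return 5.0 + 2.0 * lines + (10.0 if special else 0.0)

orders = [(4, False), (10, True), (2, False)]   # (number of order lines, special-handling flag)
used_minutes = sum(order_handling_minutes(l, s) for l, s in orders)

assigned_cost = capacity_cost_rate * used_minutes
unused_capacity_cost = capacity_cost_rate * (practical_capacity_minutes - used_minutes)

print(f"rate = {capacity_cost_rate:.2f} per minute")
print(f"cost assigned to orders = {assigned_cost:.2f}")
print(f"unused capacity cost    = {unused_capacity_cost:.2f}")
```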
In this research, we attempt to shed light on one of the methods for estimating the structural parameters of linear simultaneous-equation models, a method that provides consistent estimates which sometimes differ from those obtained by the other conventional methods covered by the general formula of the K-class estimators. This method is known as the limited information maximum likelihood (LIML) method, or the least variance ratio (LVR) method.
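In the usual notation (an assumption, since the abstract does not fix one), a structural equation y = Y₁γ + X₁β + u with Z = [Y₁  X₁], X the matrix of all exogenous variables, and M_X = I − X(X′X)⁻¹X′ has the general K-class estimator

```latex
\hat{\delta}(k) \;=\; \bigl[\,Z'(I - k\,M_X)\,Z\,\bigr]^{-1} Z'(I - k\,M_X)\,y ,
```

where k = 0 gives OLS, k = 1 gives 2SLS, and LIML is obtained by setting k equal to the smallest root of the least-variance-ratio (determinantal) problem, which is why the method is also called the least variance ratio method.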
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a broad background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application dependent.