Deep learning convolutional neural networks (CNNs) have been widely used to recognize and classify voice. Various techniques have been combined with CNNs to prepare voice data before training a classification model. However, not all models produce good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciation data and training on the processed data require a specific approach. To address this issue, a method based on padding and a deep learning CNN is proposed to evaluate the pronunciation of the Arabic alphabet. Voice data from six school children were recorded and used to test the performance of the proposed method. The padding technique was used to augment the voice data before feeding them to the CNN structure to develop the classification model. In addition, three other feature extraction techniques were introduced to enable comparison with the proposed padding-based method. The performance of the proposed method with padding is on par with the spectrogram but better than the mel-spectrogram and mel-frequency cepstral coefficients. Results also show that the proposed method was able to distinguish the Arabic letters that are difficult to pronounce. The proposed method with the padding technique may be extended to address pronunciation ability for voices other than the Arabic alphabet.
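The abstract does not specify how the padding is applied; the sketch below is a minimal illustration, assuming fixed-length zero-padding of raw waveforms so that variable-length recordings can be batched for a CNN (all names, lengths, and the sample rate are hypothetical):

```python
import numpy as np

def pad_waveform(signal: np.ndarray, target_len: int) -> np.ndarray:
    """Zero-pad (or truncate) a 1-D audio signal to a fixed length.

    Fixed-length inputs are needed before stacking recordings
    into a batch for a CNN.
    """
    if len(signal) >= target_len:
        return signal[:target_len]
    pad = target_len - len(signal)
    # Pad symmetrically so the utterance stays roughly centered.
    return np.pad(signal, (pad // 2, pad - pad // 2), mode="constant")

# Example: pad three recordings of different lengths to 16000 samples (1 s at 16 kHz).
recordings = [np.random.randn(n) for n in (12000, 15500, 9800)]
batch = np.stack([pad_waveform(r, 16000) for r in recordings])
print(batch.shape)  # (3, 16000)
```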
The research aims to demonstrate the impact of TDABC (a strategic technique compatible with the rapid developments and changes in the contemporary business environment) on pricing decisions. TDABC provides a new philosophy for allocating indirect costs: time drivers trace resources and activities to the cost object and identify unused capacity and its associated costs, which provides the management of economic units with financial and non-financial information that helps them in the complex and risky process of making pricing decisions. To achieve better pricing decisions, in light of the endeavor to retain customers in a highly competitive environment with a variety of alternatives, the research …
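The arithmetic behind TDABC is not shown in this excerpt; the sketch below illustrates the standard mechanics (capacity cost rate, time drivers, and unused-capacity cost) using the textbook numbers from the classic Kaplan-Anderson illustration, not data from this study:

```python
# Hypothetical TDABC illustration (standard textbook figures, not the paper's data).
# Capacity cost rate = cost of capacity supplied / practical capacity in time units.
cost_of_capacity = 56_000.0     # quarterly cost of the department (currency units)
practical_capacity = 70_000.0   # practical capacity in minutes per quarter
rate = cost_of_capacity / practical_capacity  # cost per minute (0.80 here)

# Time drivers: minutes consumed per unit of each activity, and activity volumes.
minutes_per_unit = {"process order": 8, "handle inquiry": 44, "credit check": 50}
volumes = {"process order": 5_000, "handle inquiry": 400, "credit check": 100}

used_minutes = sum(minutes_per_unit[a] * volumes[a] for a in minutes_per_unit)
assigned_cost = used_minutes * rate
unused_capacity_cost = (practical_capacity - used_minutes) * rate

print(f"rate/min = {rate:.2f}, assigned = {assigned_cost:,.0f}, "
      f"unused capacity cost = {unused_capacity_cost:,.0f}")
```

Separating the unused-capacity cost is what gives management the pricing-relevant signal: it keeps idle-capacity costs from being loaded onto product prices.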
The present study stresses two of the most significant aspects of the linguistic approach, “Pragmatics” and “Speech Act Theory”, revealing their importance and their stages and levels of development through an analysis of Hebrew speech acts, including political speech, the Holy Bible, and Hebrew stories.
Chronologically, Pragmatics has always been at the center of linguists' interests due to its importance in linguistic analysis, particularly through “Speech Act Theory”, which was initiated and developed by the most prominent philosophers and linguists.
The present …
The main aim of image compression is to reduce an image's size so that it can be transmitted and stored; therefore, many methods have appeared for compressing images. One of these is the Multilayer Perceptron (MLP), an artificial neural network based on the back-propagation algorithm. If the algorithm depended only on the number of neurons in the hidden layer, that alone would not be enough to reach the desired results; the criteria on which the compression process depends must also be taken into consideration to obtain the best results. In this research, a group of TIFF images of size 256×256 was trained and compressed using an MLP for each …
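The excerpt names the technique but not its layout; a minimal sketch of MLP image compression follows, assuming the common block-based setup in which 8×8 pixel blocks pass through a narrow hidden layer whose activations serve as the compressed code (the block size, hidden width, and training data are stand-ins, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

block, hidden = 64, 16          # 64 inputs (8x8 block) -> 16 hidden units: 4:1 compression
W1 = rng.normal(0, 0.1, (block, hidden))
W2 = rng.normal(0, 0.1, (hidden, block))
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dummy data standing in for 8x8 blocks of a 256x256 image, scaled to [0, 1].
X = rng.random((1024, block))

for epoch in range(100):
    h = sigmoid(X @ W1)          # forward pass: encode to the narrow hidden layer
    y = sigmoid(h @ W2)          # forward pass: decode back to the block
    err = y - X                  # reconstruction error
    # Back-propagation of the squared-error gradient through both layers.
    d2 = err * y * (1 - y)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2
    W1 -= lr * X.T @ d1

print("MSE:", float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - X) ** 2)))
```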
Image Fusion Using a Convolutional Neural Network
In this research, the Artificial Neural Network (ANN) technique was applied to study the filtration process in water treatment. Eight models were developed and tested using data from a pilot filtration plant operating under different process design criteria: influent turbidity, bed depth, grain size, filtration rate, and running time (length of the filtration run), with effluent turbidity and head losses recorded. The ANN models were constructed to predict different performance criteria of the filtration process: effluent turbidity, head losses, and running time. The results indicate that it is quite possible to use artificial neural networks to predict effluent turbidity, head losses, and running time in the filtration process, with …
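The excerpt does not describe the network architecture; the sketch below shows the general pattern, assuming a small feed-forward regressor mapping the five listed design criteria to effluent turbidity (the synthetic data, value ranges, and hidden-layer size are invented for illustration, not the pilot-plant records):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for pilot-plant records. Columns: influent turbidity (NTU),
# bed depth (m), grain size (mm), filtration rate (m/h), running time (h).
rng = np.random.default_rng(1)
X = rng.uniform([5, 0.5, 0.4, 5, 1], [50, 1.5, 1.2, 15, 24], size=(200, 5))
# Hypothetical target: effluent turbidity loosely tied to the inputs.
y = 0.05 * X[:, 0] + 0.3 * X[:, 3] / X[:, 1] + rng.normal(0, 0.1, 200)

scaler = StandardScaler()
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
model.fit(scaler.fit_transform(X), y)

# Predict for one operating point: 20 NTU in, 1.0 m bed, 0.7 mm grains, 10 m/h, 12 h.
sample = scaler.transform([[20.0, 1.0, 0.7, 10.0, 12.0]])
print("Predicted effluent turbidity (NTU):", model.predict(sample)[0])
```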
In this research, a low-cost, portable, disposable, environmentally friendly, and easy-to-use lab-on-paper platform sensor was made. The sensor was constructed using a mixture of Rhodamine-6G (Rh-6G), gold nanoparticles, and sodium chloride salt. The drop-casting method was used to fabricate the platform on commercial office paper. The substrate was characterized using a field emission scanning electron microscope, Fourier-transform infrared spectroscopy, a UV-visible spectrophotometer, and a Raman spectrometer. The Rh-6G Raman signal was enhanced through the surface-enhanced Raman spectroscopy (SERS) technique using the gold nanoparticles. The enhancement factor of the plasmonic commercial office paper reaches up to 0.9 × 10^5 because of local surface plasmon …
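The excerpt does not state how the enhancement factor was computed; a commonly used definition in the SERS literature (an assumption here, not necessarily the authors' exact procedure) is:

```latex
\[
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}}/N_{\mathrm{SERS}}}{I_{\mathrm{ref}}/N_{\mathrm{ref}}}
\]
```

where \(I_{\mathrm{SERS}}\) and \(I_{\mathrm{ref}}\) are the Raman intensities measured on the plasmonic paper and on a bare reference substrate, and \(N_{\mathrm{SERS}}\) and \(N_{\mathrm{ref}}\) are the corresponding numbers of probed molecules.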
Title: The Arabic Manuscript: Concepts and Terms and Their Impact on Determining Its Historical Beginnings and the Extension of Its Existence.
Researcher: Dr. Atallah Madb Hammadi Zubaie.
In the name of Allah, the Most Merciful
Interest in manuscripts and in the rules of their verification and publication appeared early, while discussion of the editing of their terms and concepts appeared only later. When looking at the books written on Arabic manuscripts, we find that the books of the first generation did not allude to a definition of this term but rather focused on the importance of manuscripts, the locations where they are held, indexing, care, and the rules of verification. The reason for this is that the science of Arabic manuscripts …
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically; ultimately, more data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for …