Deep learning convolutional neural networks (CNNs) have been widely used to recognize and classify voice. Various techniques have been used together with a convolutional neural network to prepare voice data before the training process when developing a classification model. However, not all models produce good classification accuracy, as there are many types of voice or speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciation data and training on the processed data require a specific approach. To address this issue, a method based on padding and a deep learning convolutional neural network is proposed to evaluate the pronunciation of the Arabic alphabet. Voice data from six school children were recorded and used to test the performance of the proposed method. The padding technique is used to augment the voice data before feeding them to the CNN structure to develop the classification model. In addition, three other feature extraction techniques were introduced to enable comparison with the proposed method, which employs the padding technique. The performance of the proposed method with the padding technique is on par with the spectrogram but better than the mel-spectrogram and mel-frequency cepstral coefficients. Results also show that the proposed method was able to distinguish the Arabic alphabets that are difficult to pronounce. The proposed method with the padding technique may be extended to address voice pronunciation ability beyond the Arabic alphabet.
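As an illustration of the data-preparation step, the following minimal Python sketch zero-pads (or truncates) raw voice recordings to a fixed length so they can be batched for a CNN; the abstract does not state the exact padding scheme, so the sample rate, target length, and centre-padding choice here are assumptions.

```python
import numpy as np

def pad_waveform(signal: np.ndarray, target_len: int) -> np.ndarray:
    """Zero-pad (or truncate) a 1-D voice signal to a fixed length.

    A fixed input size is needed before the recordings can be batched
    and fed to a CNN. The centre-padding choice is illustrative.
    """
    if len(signal) >= target_len:
        return signal[:target_len]
    pad_total = target_len - len(signal)
    pad_left = pad_total // 2           # split the padding around the utterance
    pad_right = pad_total - pad_left
    return np.pad(signal, (pad_left, pad_right), mode="constant")

# Example: pad a batch of recordings (assumed 16 kHz) to one second each.
recordings = [np.random.randn(np.random.randint(8000, 16000)) for _ in range(4)]
batch = np.stack([pad_waveform(r, target_len=16000) for r in recordings])
print(batch.shape)  # (4, 16000)
```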
Background: Many thymoma classifications have been proposed and subsequently updated by newer or alternative schemes. Many classifications used the morphology and histogenesis of the normal thymus as their backbone, while others followed a more simplified scheme in which thymomas were grouped by biological behavior. The WHO classification is the one currently advocated; it is based on "organotypical" features (i.e., histological characteristics mimicking those observed in the normal thymus), including cytoarchitecture (encapsulation and a "lobular architecture") and cellular composition, of which nuclear morphology is the most widely appreciated aspect.
Objectives: Thi
A true random TTL pulse generator was implemented and investigated for quantum key distribution systems. The random TTL signals are generated with low-cost components available in local markets. The TTL signals are obtained from true random binary sequences based on registering the photon arrival time differences recorded in coincidence windows between two single-photon detectors. The performance of the true random TTL pulse generator was tested using time-to-digital converters, which give accurate readings of photon arrival times. The proposed true random TTL pulse generator can be used in any quantum key distribution system for random operation of the system's transmitters.
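The abstract does not detail how the random binary sequence is extracted from the arrival-time differences; the sketch below shows one common extraction rule (comparing consecutive inter-arrival intervals), with simulated timestamps standing in for real detector data.

```python
import numpy as np

def bits_from_arrival_times(timestamps: np.ndarray) -> np.ndarray:
    """Derive random bits from photon arrival timestamps (in detector ticks).

    Compares consecutive inter-arrival intervals: a shorter interval
    followed by a longer one yields 1, the opposite yields 0, and equal
    intervals are discarded (a simple de-biasing rule).
    """
    intervals = np.diff(timestamps)
    t1, t2 = intervals[0::2], intervals[1::2]
    n = min(len(t1), len(t2))
    t1, t2 = t1[:n], t2[:n]
    keep = t1 != t2
    return (t1[keep] > t2[keep]).astype(np.uint8)

# Example with simulated Poissonian arrival times; these bits would then
# drive the TTL output levels of the generator.
rng = np.random.default_rng(0)
timestamps = np.cumsum(rng.exponential(scale=100.0, size=10_000)).astype(np.int64)
bits = bits_from_arrival_times(timestamps)
print(len(bits), bits[:16])
```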
This paper proposes an authentication system between business partners in an e-commerce application to prevent fraud operations, based on visual cryptography shares encapsulated by Chen's hyperchaotic key sequence. The proposed system consists of three phases: the first phase is based on color visual cryptography without complex computations, the second phase generates a sequence of DNA rule numbers, and the final encapsulation phase uses the unique initial value generated in the second phase as the initial condition of a Piecewise Linear Chaotic Map to generate sequences of DNA rule numbers. The experimental results demonstrate that the proposed system is able to overcome cheating a
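A minimal sketch of the third-phase idea, assuming the standard Piecewise Linear Chaotic Map and a simple quantization of the chaotic orbit into the eight DNA encoding rules; the initial value, control parameter, and quantization used here are illustrative.

```python
def pwlcm(x: float, p: float) -> float:
    """One iteration of the Piecewise Linear Chaotic Map; x in (0, 1), p in (0, 0.5)."""
    if x >= 0.5:                 # the map is symmetric about x = 0.5
        x = 1.0 - x
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def dna_rule_sequence(x0: float, p: float, length: int) -> list[int]:
    """Generate a sequence of DNA rule numbers (1..8) driven by the PWLCM."""
    rules, x = [], x0
    for _ in range(length):
        x = pwlcm(x, p)
        rules.append(int(x * 8) % 8 + 1)   # quantize the chaotic value to a rule index
    return rules

# Example: x0 would come from the unique initial value produced in the second phase.
print(dna_rule_sequence(x0=0.3731, p=0.27, length=10))
```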
The demands for Non-Photorealistic Rendering (NPR) have increased with the development of electronic devices. This paper presents a new model for a cartooning system, an essential category of NPR. It uses the concepts of vector quantization and Logarithmic Image Processing (LIP). An enhancement of the Kekre Median Codebook Generation (KMCG) algorithm has been proposed and used by the system. Several metrics were utilized to evaluate the time and quality of the system. The results showed that the proposed system reduced the time of cartoon production. Additionally, it enhanced quality in several aspects such as smoothing, color reduction, and brightness.
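For reference, the basic Logarithmic Image Processing (LIP) operations the system builds on can be sketched as follows; the gray-tone range M = 256 and the example values are illustrative only, and how the system combines them with the KMCG codebooks is not described in the abstract.

```python
import numpy as np

M = 256.0  # gray-tone range of the LIP model

def lip_add(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """LIP addition of two gray-tone functions: f (+) g = f + g - f*g/M."""
    return f + g - (f * g) / M

def lip_scalar_mul(alpha: float, f: np.ndarray) -> np.ndarray:
    """LIP scalar multiplication: alpha (x) f = M - M*(1 - f/M)**alpha."""
    return M - M * (1.0 - f / M) ** alpha

# Example on a synthetic gray ramp: alpha < 1 lowers the gray-tone values.
img = np.tile(np.linspace(0, 255, 256, dtype=np.float64), (64, 1))
scaled = lip_scalar_mul(0.5, img)
combined = lip_add(img, np.full_like(img, 40.0))
print(scaled.min(), scaled.max(), combined.max())
```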
Hiding secret information in an image is a challenging and painstaking task in computer security and steganography systems. Certainly, the sheer intricacy of attacks on security systems makes it more attractive. In this research on a steganography system involving information hiding, Huffman coding is used to compress the secret code before embedding, which provides high capacity and some security. Fibonacci decomposition is used to represent the pixels in the cover image, which increases the robustness of the system. One byte is used for mapping all the pixel properties. This makes the PSNR of the system higher due to the random distribution of the embedded bits. Finally, three kinds of evaluation are applied, such as PSNR, chi-square attack, a
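A small sketch of the Fibonacci (Zeckendorf) decomposition mentioned above, which represents a pixel value with more, lower-weight planes than the usual binary representation; which planes are used for embedding is not specified in the abstract.

```python
# Fibonacci numbers that cover the 8-bit pixel range [0, 255].
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # 12 "bit planes"

def fibonacci_decompose(value: int) -> list[int]:
    """Zeckendorf-style decomposition of a pixel value into Fibonacci planes.

    Greedily subtracts the largest Fibonacci number that fits, so no two
    consecutive Fibonacci terms are used; this yields more, lower-weight
    planes than binary, making LSB-style embedding less perceptible.
    """
    bits = [0] * len(FIBS)
    for i in range(len(FIBS) - 1, -1, -1):
        if FIBS[i] <= value:
            bits[i] = 1
            value -= FIBS[i]
    return bits  # bits[0] is the least significant (weight 1) plane

def fibonacci_compose(bits: list[int]) -> int:
    return sum(b * f for b, f in zip(bits, FIBS))

pixel = 200
planes = fibonacci_decompose(pixel)
print(planes, fibonacci_compose(planes) == pixel)
```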
... Show MoreThe fact that the signature is widely used as a means of personal verification
emphasizes the need for an automatic verification system. Verification can be
performed either Offline or Online based on the application. Offline systems work on
the scanned image of a signature. In this paper an Offline Verification of handwritten
signatures which use set of simple shape based geometric features. The features used
are Mean, Occupancy Ratio, Normalized Area, Center of Gravity, Pixel density,
Standard Deviation and the Density Ratio. Before extracting the features,
preprocessing of a scanned image is necessary to isolate the signature part and to
remove any spurious noise present. Features Extracted for whole signature
Image content verification aims to confirm the validity of images, i.e., to test whether an image has undergone any alteration since it was created. Digital watermarking has become a promising technique for image content verification owing to its excellent performance and its capability for tamper detection.
In this study, a new scheme for image verification based on two-dimensional chaotic maps and the Discrete Wavelet Transform (DWT) is introduced. The Arnold transform is first applied to the host image (H) for scrambling as a preprocessing stage; then the scrambled host image is partitioned into sub-blocks of size 2×2, and a 2D DWT is utilized on each
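For clarity, the Arnold-transform scrambling step can be sketched as follows for a square host image; the number of iterations (which acts as a key) is an assumption.

```python
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square N x N image with the Arnold (cat) map.

    Each pixel (x, y) is moved to ((x + y) mod N, (x + 2y) mod N). The map
    is periodic, so the original image can be recovered by iterating until
    the period is reached (or by applying the inverse map).
    """
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# Example: scramble a small test image twice before block-wise DWT embedding.
host = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(arnold_scramble(host, iterations=2))
```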
Cryptography is one way used to transfer data safely and securely from a sender to a receiver. However, to increase the level of data security, DNA was introduced as a new term in cryptography. DNA can easily be used to store and transfer data; it has become an effective procedure for such aims and is used to implement the computation. A new cryptography system is proposed, consisting of two phases: the encryption phase and the decryption phase. The encryption phase includes six steps, starting by converting the plaintext to its equivalent ASCII values and converting them to binary values. After that, the binary values are converted to DNA characters and then converted to their equivalent complementary DNA
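A minimal sketch of the first encryption steps described above (ASCII → binary → DNA → complement), using one common binary-to-DNA mapping; the paper's specific encoding rule, and its remaining steps, are not given in the abstract.

```python
# One common binary-to-DNA mapping; the paper may use a different rule.
BIN_TO_DNA = {"00": "A", "01": "C", "10": "G", "11": "T"}
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}  # Watson-Crick complement

def encrypt_to_dna(plaintext: str) -> str:
    """First steps of the encryption phase: text -> ASCII -> binary -> DNA -> complement."""
    binary = "".join(f"{ord(ch):08b}" for ch in plaintext)              # ASCII to 8-bit binary
    dna = "".join(BIN_TO_DNA[binary[i:i + 2]] for i in range(0, len(binary), 2))
    return "".join(COMPLEMENT[base] for base in dna)                    # complementary strand

def decrypt_from_dna(dna_cipher: str) -> str:
    """Reverse the steps above (the scheme's remaining steps are omitted here)."""
    dna = "".join(COMPLEMENT[base] for base in dna_cipher)
    dna_to_bin = {v: k for k, v in BIN_TO_DNA.items()}
    binary = "".join(dna_to_bin[base] for base in dna)
    return "".join(chr(int(binary[i:i + 8], 2)) for i in range(0, len(binary), 8))

cipher = encrypt_to_dna("Hi")
print(cipher, decrypt_from_dna(cipher))
```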
Biometrics is widely used in security systems nowadays; each biometric modality can be useful and has distinctive properties that provide uniqueness and ambiguity for security systems, especially in communication and network technologies. This paper is about using biometric features of the fingerprint, called minutiae, to encipher a text message and ensure safe arrival of the data at the receiver end. The classical cryptosystems (Caesar, Vigenère, etc.) have become obsolete methods of encryption because of high-performance machines whose attacks focus on the repetition of the key to break the cipher. Several cryptography researchers have made efforts to modify and develop the Vigenère cipher by addressing its weaknesses.
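A toy sketch of the general idea: deriving a Vigenère key from fingerprint minutiae coordinates and using it to encipher a message. The key-derivation function below is purely illustrative, since the abstract does not specify how the minutiae are turned into a key.

```python
import string

ALPHABET = string.ascii_uppercase

def key_from_minutiae(minutiae: list[tuple[int, int]]) -> str:
    """Fold each (x, y) minutia coordinate pair into one key letter (illustrative)."""
    return "".join(ALPHABET[(x + y) % 26] for x, y in minutiae)

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Classical Vigenère cipher over the uppercase alphabet."""
    out, sign = [], -1 if decrypt else 1
    for i, ch in enumerate(text.upper()):
        if ch not in ALPHABET:
            out.append(ch)          # pass non-letters through unchanged
            continue
        shift = sign * ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
    return "".join(out)

minutiae = [(103, 47), (88, 210), (15, 164), (230, 9)]   # ridge endings/bifurcations (x, y)
key = key_from_minutiae(minutiae)
cipher = vigenere("SECURE MESSAGE", key)
print(key, cipher, vigenere(cipher, key, decrypt=True))
```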
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands of the image are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Because of the importance of the most significant value (MSV), which is affected by even simple modifications, an adaptive lossless image compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on
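A minimal sketch of the MSV/LSV decomposition by bit-plane slicing; the 4/4 bit split used here is an assumption, as the abstract does not state where the split is made.

```python
import numpy as np

def split_msv_lsv(band: np.ndarray, msv_bits: int = 4) -> tuple[np.ndarray, np.ndarray]:
    """Split an 8-bit band into most- and least-significant values by bit-plane slicing.

    The MSV (upper planes) would be coded losslessly, the LSV (lower planes)
    lossily; the number of bits assigned to the MSV is an assumption.
    """
    shift = 8 - msv_bits
    msv = band >> shift                  # upper bit planes
    lsv = band & ((1 << shift) - 1)      # lower bit planes
    return msv, lsv

def merge_msv_lsv(msv: np.ndarray, lsv: np.ndarray, msv_bits: int = 4) -> np.ndarray:
    return (msv << (8 - msv_bits)) | lsv

band = np.array([[200, 33], [129, 7]], dtype=np.uint8)
msv, lsv = split_msv_lsv(band)
assert np.array_equal(merge_msv_lsv(msv, lsv), band)
print(msv, lsv, sep="\n")
```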