The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a form of unsupervised learning that partitions unlabeled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are common examples. Clustering thus divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM to cluster genes. FCM allows an object to belong to two or more clusters with a membership grade between zero and one, and the membership grades of each gene across all clusters sum to one. This paradigm is useful when dealing with microarray data. The total time required to implement the first model is 22.2589 s. The second model combines FCM with particle swarm optimization (PSO) to obtain better results; the hybrid FCM–PSO algorithm uses the Davies–Bouldin (DB) index as its objective function. The experimental results show that the proposed hybrid FCM–PSO method is effective, with a total implementation time of 89.6087 s. The third model combines FCM with a genetic algorithm (GA) and also uses the DB index as its objective function. The experimental results show that the proposed hybrid FCM–GA method is effective, with a total implementation time of 50.8021 s. In addition, this study uses cluster validity indexes to determine the best partitioning of the underlying data. The internal validity indexes are the Jaccard, Davies–Bouldin, Dunn, Xie–Beni, and silhouette indexes, while the external validity indexes are the Minkowski index, the adjusted Rand index, and the percentage of correctly categorized pairings. Experiments conducted on brain tumor gene expression data demonstrate that the techniques used in this study outperform traditional models in terms of stability and biological significance.
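As a rough illustration of the FCM step described above, here is a minimal NumPy sketch (the function name, the fuzzifier m = 2, and the stopping settings are illustrative assumptions, not the authors' implementation). It alternates the standard centroid and membership updates, so each gene's memberships always sum to one:

import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    # X: (n_samples, n_features) expression matrix; c: number of clusters.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                    # rows of U sum to one
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                            # avoid division by zero
        # Membership update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

In the hybrid models, a PSO or GA outer loop would then search for the partition that minimizes the Davies–Bouldin index rather than relying on these local updates alone.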
Nowadays, people's expression on the Internet is no longer limited to text. With the rise of the short-video boom in particular, a large amount of multi-modal data such as text, pictures, audio, and video has emerged. Compared with single-modal data, multi-modal data contain much more information, and mining this information can help computers better understand human emotional characteristics. However, because multi-modal data show obvious dynamic time-series features, the fusion process must solve the problem of dynamic correlation both within a single mode and between different modes in the same application scene. To solve this problem, this paper proposes a feature extraction framework of
High frequency (HF) communications play an important role in long-distance wireless communications. This band is more important than VHF and UHF because HF signals can cover longer distances with a single hop. It has a low operating cost because it offers over-the-horizon communication without repeaters, so it can serve as a backup for satellite communications in emergency conditions. One of the main problems in HF communications is predicting the propagation direction and the frequency of optimum transmission (FOT) that must be used at a certain time. This paper introduces a new technique based on an Oblique Ionosonde Station (OIS) to overcome this problem in a cheaper and easier way. This technique uses the
Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system involves three main stages: pre-processing and face area allocation, feature extraction, and classification. This study presents a new system based on a set of geometric features (distances and angles) derived from basic facial components such as the eyes, eyebrows, and mouth using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation purposes, the standard JAFFE database has been used as test material; it holds face samples for seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances, angles
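As a hedged sketch of the geometric-feature idea (the landmark coordinates and the particular distances and angles below are hypothetical examples, not the paper's exact feature set), such features can be computed from 2-D facial landmarks as follows:

import numpy as np

def distance(p, q):
    # Euclidean distance between two 2-D landmark points.
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def angle_at(vertex, a, b):
    # Angle in degrees at `vertex`, formed by the rays toward a and b.
    u = np.asarray(a, float) - np.asarray(vertex, float)
    v = np.asarray(b, float) - np.asarray(vertex, float)
    cos_t = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Hypothetical landmark positions (pixels) for one face sample.
left_eye, right_eye = (88, 110), (152, 110)
mouth_left, mouth_right = (100, 180), (140, 180)
features = [
    distance(left_eye, right_eye),                 # eye separation
    distance(mouth_left, mouth_right),             # mouth width
    angle_at(mouth_left, right_eye, mouth_right),  # mouth-corner angle
]

A vector of such measurements would then be fed to the feed-forward neural network classifier.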
Information security is a crucial factor when communicating sensitive information between two parties, and steganography is one of the most widely used techniques for this purpose. This paper aims to enhance the capacity and robustness of information hiding by compressing image data to a small size while maintaining high quality, so that the secret information remains invisible and only the sender and recipient can recognize the transmission. Three techniques are employed to conceal color and gray images: the Wavelet Color Process Technique (WCPT), the Wavelet Gray Process Technique (WGPT), and the Hybrid Gray Process Technique (HGPT). A comparison between the first and second techniques according to quality metrics, Root-Mean-Square Error (RMSE), Compression-
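The WCPT/WGPT/HGPT procedures are not spelled out in this excerpt, but the kind of wavelet-compression step such pipelines build on can be sketched as follows (a minimal illustration using PyWavelets; the wavelet, decomposition level, and kept-coefficient fraction are arbitrary assumptions), together with the RMSE metric mentioned above:

import numpy as np
import pywt

def wavelet_compress(gray, wavelet="haar", level=2, keep=0.05):
    # Keep only the largest `keep` fraction of wavelet coefficients.
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    flat = pywt.threshold(flat, thresh, mode="hard")
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

def rmse(a, b):
    # Root-Mean-Square Error between original and reconstructed images.
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))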
Password authentication is a popular approach to system security and an important procedure for gaining access to user resources. This paper describes a password authentication method that uses the Modified Bidirectional Associative Memory (MBAM) algorithm for both graphical and textual passwords to achieve greater efficiency in speed and accuracy. Across 100 tests, the accuracy was 100% for both graphical and textual passwords when authenticating a user.
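The exact MBAM modification is not given in this excerpt, but the classical Bidirectional Associative Memory it builds on can be sketched as follows (a minimal illustration with hypothetical bipolar encodings; a real system would encode the graphical or textual password into such vectors):

import numpy as np

def train_bam(pairs):
    # Hebbian outer-product storage: W = sum of x y^T over bipolar (+/-1) pairs.
    return sum(np.outer(x, y) for x, y in pairs)

def recall(W, x, steps=10):
    # Bidirectional recall: bounce between layers until the pair stabilizes.
    y = np.sign(x @ W)
    for _ in range(steps):
        x_new = np.sign(W @ y)
        y_new = np.sign(x_new @ W)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Hypothetical bipolar encodings of an identity and its password pattern.
user = np.array([1, -1, 1, 1, -1, 1])
pw = np.array([-1, 1, 1, -1])
W = train_bam([(user, pw)])
authenticated = np.array_equal(recall(W, user)[1], pw)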
In this paper, an algorithm through which we can embed more data than regular spatial-domain methods is introduced. We compress the secret data using Huffman coding and then embed the compressed data using the Laplacian sharpening method. We use Laplace filters to determine effective hiding places; based on a threshold value, the places with the highest values acquired from these filters are selected for embedding the watermark. Our aim in this work is to increase the capacity of the embedded information by using Huffman coding while at the same time increasing the security of the algorithm by hiding data in the places with the highest edge values, where changes are less noticeable. The perform
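A minimal sketch of the selection-and-embedding step described above (the Laplacian kernel, the threshold, and the LSB substitution are illustrative assumptions; the Huffman encoder that produces `bits` is omitted):

import numpy as np
from scipy.ndimage import convolve

LAPLACE = np.array([[0, 1, 0],
                    [1, -4, 1],
                    [0, 1, 0]])

def embed_bits(cover, bits, threshold):
    # cover: 2-D uint8 image; bits: Huffman-coded payload as 0/1 ints.
    stego = cover.copy()
    edge = np.abs(convolve(cover.astype(float), LAPLACE))
    ys, xs = np.where(edge > threshold)           # high-edge hiding places
    assert len(bits) <= len(ys), "cover has too few edge pixels for payload"
    for bit, y, x in zip(bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit  # replace the LSB
    return stego

A matching extractor would need to recover the same thresholded positions and read the LSBs back before Huffman decoding.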
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions that carry no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
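A minimal sketch of the idea (using per-block variance as a stand-in importance measure and a fixed kept-coefficient count per block; the paper's actual importance criterion is not given in this excerpt):

import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep):
    # Keep only the top-left `keep` x `keep` low-frequency DCT coefficients.
    c = dctn(block.astype(float), norm="ortho")
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0
    return idctn(c * mask, norm="ortho")

def adaptive_compress(img, bs=8, var_thresh=100.0):
    # img: 2-D grayscale array (one channel of a color image).
    out = np.empty(img.shape, dtype=float)
    for y in range(0, img.shape[0], bs):
        for x in range(0, img.shape[1], bs):
            blk = img[y:y+bs, x:x+bs]
            # Detailed blocks keep more coefficients than smooth ones.
            keep = 6 if blk.var() > var_thresh else 2
            out[y:y+bs, x:x+bs] = compress_block(blk, keep)
    return out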