Recommendation systems are now used to address information overload in sectors such as entertainment, social networking, and e-commerce. Although conventional approaches to recommendation have achieved considerable success in providing item suggestions, they still face challenges such as the cold-start problem and data sparsity. Numerous recommendation models have been developed to address these difficulties; nevertheless, incorporating user- or item-specific information can further enhance recommendation performance. The present work introduces ConvFM, a novel hybrid deep factorization machine model that combines the feature-extraction capabilities of convolutional neural networks (CNNs) with the effectiveness of factorization machines (FMs) for recommendation tasks. ConvFM uses CNNs to extract features from both users and items (here, movies) and then feeds the extracted features into an FM. The CNN's focus on feature extraction yields a notable improvement in performance. To enhance prediction accuracy and address the challenges posed by sparsity, the proposed model combines the extracted features with explicit user-item interactions. The paper reports experimental procedures and results on the MovieLens dataset, discusses the findings, and offers recommendations for further work.
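The abstract above pairs CNN feature extraction with a factorization machine. As background, a second-order FM scores a feature vector with a bias, linear weights, and pairwise interactions factorized through latent vectors; the sketch below shows only that standard FM prediction step (not the authors' ConvFM implementation), using the well-known O(nk) identity for the interaction term.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x : (n,) feature vector (e.g., one-hot user/item plus side features)
    w0: global bias; w: (n,) linear weights; V: (n, k) latent factors.
    Uses the identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ],
    which reduces the pairwise term from O(n^2 k) to O(n k).
    """
    linear = w0 + w @ x
    s = V.T @ x                  # (k,) sums per latent factor
    s2 = (V ** 2).T @ (x ** 2)   # (k,) sums of squares per factor
    interactions = 0.5 * np.sum(s ** 2 - s2)
    return linear + interactions
```

In a ConvFM-style pipeline, the entries of `x` would include CNN-extracted user and item features rather than raw identifiers; that wiring is specific to the paper and is not reproduced here.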
This research deals with estimating the reliability function of the two-parameter exponential distribution using several estimation methods: maximum likelihood, median-first order statistics, ridge regression, modified Thompson-type shrinkage, and single-stage shrinkage. The estimators were compared via Monte Carlo simulation using the mean squared error (MSE) as the statistical criterion; the results indicate that the shrinkage methods perform better than the other methods.
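For the two-parameter exponential distribution with location mu and scale theta, the reliability function is R(t) = exp(-(t - mu)/theta) for t >= mu, and the classical maximum-likelihood estimators are mu_hat = t_(1) (the sample minimum) and theta_hat = mean - t_(1). The sketch below illustrates the Monte Carlo MSE comparison for the MLE-based estimator only; the shrinkage estimators studied in the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def reliability(t, mu, theta):
    """R(t) = exp(-(t - mu) / theta) for t >= mu (two-parameter exponential)."""
    return np.exp(-(t - mu) / theta)

def mle_estimates(sample):
    """MLEs for the two-parameter exponential: mu_hat = min, theta_hat = mean - min."""
    mu_hat = sample.min()
    theta_hat = sample.mean() - mu_hat
    return mu_hat, theta_hat

def mc_mse(mu=1.0, theta=2.0, t=3.0, n=30, reps=2000, seed=0):
    """Monte Carlo estimate of the MSE of the plug-in reliability estimator at time t."""
    rng = np.random.default_rng(seed)
    true_R = reliability(t, mu, theta)
    sq_errs = []
    for _ in range(reps):
        sample = mu + rng.exponential(theta, size=n)  # shifted exponential draws
        mu_hat, theta_hat = mle_estimates(sample)
        sq_errs.append((reliability(t, mu_hat, theta_hat) - true_R) ** 2)
    return float(np.mean(sq_errs))
```

A shrinkage estimator would replace the plug-in step with a weighted combination of the MLE and a prior guess; comparing its `mc_mse` against the MLE's is exactly the style of study the abstract describes.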
The present study aims to investigate the various request constructions used in Classical Arabic and Modern Arabic by identifying the differences in their usage across these two varieties. The study also attempts to trace cases of felicitous and infelicitous requests in Arabic. Methodologically, it employs a web-based corpus tool (Sketch Engine) to analyze two corpora: the first is Classical Arabic, represented by the King Saud University Corpus of Classical Arabic, while the second is The Arabic Web Corpus "arTenTen", representing Modern Arabic. To do so, the study relies on felicity conditions to qualitatively interpret the quantitative data, i.e., following a mixed-methods approach.
This article proposes a new strategy based on a hybrid method that combines the gravitational search algorithm (GSA) with the bat algorithm (BAT) to solve single-objective optimization problems. The approach first runs GSA, followed by BAT as a second step. It relies on a parameter between 0 and 1 to address the problem of becoming trapped in local optima: the lack of a local search mechanism increases search intensity, while diversity remains high and the search easily falls into a local optimum. The improvement preserves the speed of the original BAT, accelerates access to the best solution, and updates all solutions in the population before the proposed algorithm terminates. The diversification f
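The two-phase structure described above (a GSA stage followed by a BAT stage, with a parameter in [0, 1] governing the split) can be sketched as a toy minimizer. This is an illustrative simplification, not the paper's algorithm: the gravity schedule, bounds, and the `split` parameter's exact role are assumptions, and standard BAT details such as loudness and pulse rate are omitted.

```python
import numpy as np

def sphere(x):
    """Toy objective: f(x) = sum of squares, minimized at the origin."""
    return float(np.sum(x ** 2))

def gsa_bat(obj, dim=5, pop=20, iters=200, split=0.5, seed=0):
    """Two-phase GSA-then-BAT sketch; `split` in [0, 1] is the fraction
    of iterations spent in the GSA phase before switching to BAT."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))
    V = np.zeros_like(X)
    best = X[np.argmin([obj(x) for x in X])].copy()
    switch = int(split * iters)
    for t in range(iters):
        fit = np.array([obj(x) for x in X])
        if fit.min() < obj(best):
            best = X[np.argmin(fit)].copy()
        if t < switch:
            # GSA phase: mass-weighted attraction toward fitter agents.
            m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
            M = m / (m.sum() + 1e-12)
            G = 5.0 * np.exp(-20.0 * t / iters)  # decaying gravity constant
            for i in range(pop):
                acc = np.zeros(dim)
                for j in range(pop):
                    if i != j:
                        d = np.linalg.norm(X[j] - X[i]) + 1e-12
                        acc += rng.random() * G * M[j] * (X[j] - X[i]) / d
                V[i] = rng.random() * V[i] + acc
        else:
            # BAT phase: frequency-tuned pull toward the current best solution.
            freq = rng.uniform(0, 2, size=pop)
            V = V + (X - best) * freq[:, None]
        X = np.clip(X + V, -5, 5)
    return best, obj(best)
```

The intent of the hybrid is visible in the structure: the GSA phase supplies diversification across the population, while the BAT phase intensifies the search around the best solution found so far.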
High vehicular mobility causes frequent changes in vehicle density, discontinuity in inter-vehicle communication, and constraints on routing protocols in vehicular ad hoc networks (VANETs). Routing must avoid forwarding packets through segments with low network density and frequent network disconnections, which can result in packet loss, delays, and increased communication overhead during route recovery. Therefore, both traffic and segment status must be considered. This paper presents real-time intersection-based segment-aware routing (RTISAR), an intersection-based, segment-aware algorithm for geographic routing in VANETs. This routing algorithm provides an optimal route for forwarding data packets toward their destination.
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
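Unlike k-means, which assigns each point to exactly one cluster, FCM gives each point a membership degree in every cluster, with memberships summing to 1. The sketch below is a minimal textbook FCM (alternating center and membership updates with fuzzifier m); it is illustrative only and not the study's model, and the convergence settings are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means sketch.

    X: (n, d) data; c: number of clusters; m > 1: fuzzifier.
    Returns (centers, U), where U[i, j] is the membership of point i
    in cluster j and each row of U sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        # Centers: membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Memberships: inverse-distance weighting, u_ij ∝ d_ij^(-2/(m-1)).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:      # converged
            U = U_new
            break
        U = U_new
    return centers, U
```

Setting m close to 1 makes the memberships nearly hard (approaching k-means), while larger m produces fuzzier partitions; this is the trade-off that motivates choosing FCM for overlapping gene-expression clusters.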