Recommendation systems are now being used to address the problem of information overload in several sectors, such as entertainment, social networking, and e-commerce. Although conventional approaches to recommendation have achieved significant success in providing item suggestions, they still face many challenges, including the cold-start problem and data sparsity. Numerous recommendation models have been created to address these difficulties; nevertheless, incorporating user- or item-specific information can further enhance recommendation performance. The present work introduces a novel hybrid deep factorization machine (FM) model, referred to as ConvFM, which combines the feature-extraction capabilities of deep learning with the effectiveness of factorization machines for recommendation tasks. The ConvFM model uses convolutional neural networks (CNNs) to extract features from both users and items (movies) and then feeds them into a factorization machine (FM). The CNN's focus on feature extraction yields a notable improvement in performance. To enhance prediction accuracy and address the challenges posed by sparsity, the proposed model incorporates both the extracted features and the explicit interactions between items and users. This paper presents the experimental procedures and results obtained on the MovieLens dataset, analyses the research outcomes, and provides recommendations for future work.
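As a hedged illustration of how CNN-derived features can feed a factorization machine, the sketch below computes the standard second-order FM prediction over a feature vector that concatenates hypothetical CNN embeddings for a user and a movie. The dimensions, variable names, and random embeddings are assumptions for illustration only, not the paper's exact ConvFM architecture.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x  : (d,)   input feature vector (e.g. CNN-extracted user/item features)
    w0 : scalar global bias
    w  : (d,)   first-order weights
    V  : (d, k) latent factor matrix for pairwise interactions
    """
    linear = w0 + w @ x
    # Pairwise term via the O(d*k) identity:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    s1 = (V.T @ x) ** 2          # (k,)
    s2 = (V.T ** 2) @ (x ** 2)   # (k,)
    pairwise = 0.5 * np.sum(s1 - s2)
    return linear + pairwise

# Hypothetical CNN-extracted embeddings for one user and one movie.
rng = np.random.default_rng(0)
user_feat, movie_feat = rng.normal(size=32), rng.normal(size=32)
x = np.concatenate([user_feat, movie_feat])      # (64,)

d, k = x.size, 8
w0 = 0.0
w = rng.normal(scale=0.1, size=d)
V = rng.normal(scale=0.1, size=(d, k))
print("predicted rating:", fm_predict(x, w0, w, V))
```

The reformulated pairwise term keeps the interaction cost linear in the feature dimension, which is what makes combining an FM with high-dimensional CNN features practical.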
In this paper, an algorithm is introduced through which more data can be embedded than with conventional spatial-domain methods. The secret data is first compressed using Huffman coding, and the compressed data is then embedded using a Laplacian-sharpening-based method. Laplace filters are used to determine effective hiding places; based on a threshold value, the positions with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places with the highest edge values, where changes are less noticeable.
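The following is a minimal sketch of the location-selection and embedding idea described above, assuming the secret message has already been Huffman-compressed into a bit string. The function name `embed_bits`, the threshold value, and the LSB replacement rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import laplace

def embed_bits(cover, bits, threshold):
    """Embed a bit string into the LSBs of pixels with strong Laplacian response.

    cover     : 2-D uint8 cover image
    bits      : iterable of 0/1 (e.g. Huffman-compressed secret data)
    threshold : keep only pixels whose |Laplacian| exceeds this value
    """
    stego = cover.copy()
    edge_strength = np.abs(laplace(cover.astype(np.int32)))
    # Candidate embedding positions: strong edges, ordered by response.
    ys, xs = np.where(edge_strength > threshold)
    order = np.argsort(-edge_strength[ys, xs])
    positions = list(zip(ys[order], xs[order]))
    if len(bits) > len(positions):
        raise ValueError("not enough high-edge pixels for this payload")
    for bit, (y, x) in zip(bits, positions):
        stego[y, x] = (stego[y, x] & 0xFE) | bit   # replace the least significant bit
    return stego, positions[:len(bits)]

# Toy example: a random cover image and an (already Huffman-compressed) bit payload.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego, used = embed_bits(cover, payload, threshold=200)
```

In a complete scheme the extractor must be able to recompute the same embedding positions after the LSB changes, for example by deriving the Laplacian from bit planes the embedding does not modify; the abstract does not detail this step.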
A series of liquid crystals comprising heterocyclic dihydropyrrole and 1,2,3-triazole rings [VII]-[X] was synthesized in several steps, starting from the reaction of 3,3'-dimethyl-[1,1'-biphenyl]-4,4'-diamine with chloroacetyl chloride in a mixture of DMF and TEA to give compound [I]. Compound [I] then reacted with malononitrile in 1,4-dioxane and TEA to produce compound [II]. The first step was repeated with compound [II], which reacted with chloroacetyl chloride in a mixture of DMF and TEA to give compound [III]. This compound reacted with sodium azide in the presence of sodium chloride, with DMF as solvent, to produce compound [IV], which reacted with acrylic acid by a 1,3-dipolar reaction in sol
In this paper, a wireless network is planned; the network is based on the IEEE 802.16e (WiMAX) standard. The targets of this paper are maximizing coverage and service while keeping operational fees low. The WiMAX network is planned through three approaches. In the first approach, network coverage is maximized by extending cell coverage and selecting the best sites (with a bandwidth (BW) of 5 MHz, 20 MHz per sector, and four sectors per cell). In the second approach, interference is analysed in CNIR mode. In the third approach, Quality of Service (QoS) is tested and evaluated. ATDI ICS software (Interference Cancellation System) is used to perform the planning. The results show that the planned area covers 90.49% of Baghdad City and used 1000 mob
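For context on the interference analysis mentioned in the second approach, the short sketch below computes a carrier-to-noise-plus-interference ratio (CNIR) in dB from carrier, noise, and aggregate interference powers. The input values are illustrative only and are unrelated to the ATDI ICS results reported in the paper.

```python
import math

def cnir_db(carrier_dbm, noise_dbm, interference_dbm):
    """Carrier-to-(noise + interference) ratio in dB.

    All inputs are powers in dBm; noise and interference are summed in linear
    (milliwatt) units before converting the ratio back to dB.
    """
    n_mw = 10 ** (noise_dbm / 10.0)
    i_mw = 10 ** (interference_dbm / 10.0)
    return carrier_dbm - 10.0 * math.log10(n_mw + i_mw)

# Example: -70 dBm carrier, -100 dBm thermal noise, -95 dBm co-channel interference.
print(f"CNIR = {cnir_db(-70.0, -100.0, -95.0):.1f} dB")
```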
Image pattern classification is considered a significant step for image and video processing. Although various image pattern algorithms proposed so far have achieved adequate classification, achieving higher accuracy while reducing the computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy. Such a method can accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, most existing studies to date have focused on evaluating their methods based on specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOMs).
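The paper's classifier is built on features derived from discrete orthogonal moments; as a simplified stand-in, the sketch below labels blocks as plain, edge, or texture using basic variance and gradient statistics. The thresholds and the feature choice are assumptions made only to illustrate the PET block categories.

```python
import numpy as np

def classify_block(block, plain_var=25.0, edge_ratio=0.6):
    """Label an image block as 'plain', 'edge', or 'texture'.

    block      : 2-D array of pixel intensities
    plain_var  : variance below which the block is considered plain
    edge_ratio : fraction of gradient energy along the dominant direction
                 above which the block is considered an edge block
    Thresholds are illustrative; the paper derives its features from
    discrete orthogonal moments rather than raw gradients.
    """
    block = block.astype(np.float64)
    if block.var() < plain_var:
        return "plain"
    gy, gx = np.gradient(block)
    ex, ey = np.sum(gx ** 2), np.sum(gy ** 2)
    dominant = max(ex, ey) / (ex + ey + 1e-12)
    return "edge" if dominant > edge_ratio else "texture"

# Toy blocks: a flat patch, a vertical step edge, and a checkerboard texture.
flat = np.full((8, 8), 128.0)
step = np.hstack([np.zeros((8, 4)), np.full((8, 4), 255.0)])
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0
print([classify_block(b) for b in (flat, step, checker)])
```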
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
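A minimal sketch of the decomposition step described above is given below, assuming a first-order (planar) polynomial fit per block and a simple run-length encoder on the quantized residue. The block size, polynomial order, and the final Huffman stage are assumptions or omissions here and may differ from the paper's exact design.

```python
import numpy as np

def plane_fit(block):
    """Least-squares fit of a first-order polynomial a + b*x + c*y to a block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    approx = (A @ coeffs).reshape(h, w)
    return coeffs, approx

def run_length_encode(values):
    """Run-length encode a 1-D sequence as (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, c) for v, c in runs]

# Toy 8x8 block: a smooth ramp plus one small detail.
block = np.add.outer(np.arange(8), np.arange(8)) * 4
block[3, 3] += 10

coeffs, approx = plane_fit(block)
residue = np.rint(block - approx).astype(np.int16)   # error of the polynomial fit
rle = run_length_encode(residue.ravel().tolist())
# `coeffs` and `rle` would then be entropy-coded (e.g. Huffman) in a final stage.
print(len(rle), "runs from", residue.size, "residual samples")
```

Because the polynomial captures the smooth part of each block, the residue is dominated by small, repetitive values, which is what makes the run-length and Huffman stages effective.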
The present study aims to identify the most and least common teaching practices among faculty members at Northern Border University according to brain-based learning theory, as well as to identify the effect of sex, qualifications, faculty type, and years of experience on teaching practices. The study sample consisted of (199) participants: 100 males and 99 females. The results revealed that the most common teaching practice among the study sample was 'I try to create an environment of encouragement and support within the classroom', with a mean of (4.4623), while the least common teaching practice was 'I use natural musical sounds to set students' mood to learn', with a mean of (2.2965). The study results also in