Recommendation systems are now used to address the problem of information overload in several sectors, such as entertainment, social networking, and e-commerce. Although conventional approaches to recommendation have achieved significant success in providing item suggestions, they still face many challenges, including the cold-start problem and data sparsity. Numerous recommendation models have been created to address these difficulties; nevertheless, incorporating user- or item-specific information has the potential to further enhance recommendation performance. The present work introduces a novel hybrid deep factorization machine model, referred to as ConvFM, which combines the capability of deep learning for feature extraction with the effectiveness of factorization machines (FMs) for recommendation tasks. The ConvFM model uses convolutional neural networks (CNNs) to extract features from both users and items, namely movies; the extracted features are then passed to a factorization machine. The CNN's focus on feature extraction yields a notable improvement in performance. To improve prediction accuracy and address sparsity, the proposed model incorporates both the extracted features and explicit user-item interactions. This paper presents the experimental procedures and results obtained on the MovieLens dataset, analyzes our research outcomes, and provides recommendations for future work.
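As a rough illustration of the ConvFM idea (not the authors' code), the sketch below scores a feature vector that concatenates one-hot user/item indicators with dense features assumed to come from a CNN extractor, using the standard second-order factorization machine equation; all sizes and values are placeholders.

```python
# Minimal sketch: FM scoring of one-hot user/item indicators plus CNN features.
import numpy as np

def fm_predict(x, w0, w, V):
    """FM prediction: w0 + <w, x> + 1/2 * sum_f [(Vx)_f^2 - (V^2 x^2)_f]."""
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return linear + interactions

rng = np.random.default_rng(0)
n_users, n_items, n_cnn, k = 100, 50, 8, 4     # k = latent factor dimension
d = n_users + n_items + n_cnn

# Hypothetical CNN output for one (user, item) pair; in ConvFM this would come
# from the convolutional feature extractor.
cnn_features = rng.normal(size=n_cnn)

x = np.zeros(d)
x[3] = 1.0                      # one-hot user id 3
x[n_users + 7] = 1.0            # one-hot item id 7
x[n_users + n_items:] = cnn_features

w0, w, V = 0.1, rng.normal(scale=0.01, size=d), rng.normal(scale=0.01, size=(d, k))
print("predicted rating:", fm_predict(x, w0, w, V))
```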
Photonic Crystal Fiber Interferometers (PCFIs) are widely used for sensing applications. This work presents the fabrication and characterization of a relative humidity sensor based on a polymer-coated photonic crystal fiber operating in Mach-Zehnder Interferometer (MZI) transmission mode. The sensor was fabricated by splicing a short (1 cm) length of Photonic Crystal Fiber (PCF) between two single-mode fibers (SMFs) and then coating it with a layer of agarose solution. Experimental results showed that a high humidity sensitivity of 29.37 pm/%RH was achieved within a measurement range of 27–95 %RH. The sensor also offers good repeatability, small size, good measurement accuracy, and a wide humidity range. The RH sensitivity of…
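For context, the helper below shows how the reported sensitivity of 29.37 pm/%RH would be used in practice to convert a measured dip-wavelength shift into a relative-humidity change; the example shift value is illustrative, not from the paper.

```python
# Convert a wavelength shift (pm) into a relative-humidity change (%RH)
# using the sensitivity reported in the abstract.
SENSITIVITY_PM_PER_RH = 29.37          # pm per %RH, from the paper

def rh_change_from_shift(delta_lambda_pm: float) -> float:
    """Return the relative-humidity change (%RH) for a wavelength shift in pm."""
    return delta_lambda_pm / SENSITIVITY_PM_PER_RH

print(rh_change_from_shift(587.4))     # ~20 %RH for a 587.4 pm shift (illustrative)
```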
A particle swarm optimization (PSO) algorithm and a neural-network-like self-tuning PID controller for a CSTR system are presented. The discrete-time PID control structure is based on a neural network, and the parameters of the PID controller are tuned using PSO as a simple and fast training algorithm. The proposed method has the advantage that a combined structure of identification and decision is not necessary because PSO is used. Simulation results show the effectiveness of the proposed adaptive PID neural control algorithm in terms of the minimum tracking error and the smooth control signal obtained for the nonlinear dynamical CSTR system.
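The sketch below is a minimal, generic version of PSO-based PID tuning, not the paper's algorithm: particles are (Kp, Ki, Kd) triples scored by the integral of squared error when controlling a simple first-order surrogate plant, whereas the paper uses a nonlinear CSTR model.

```python
import numpy as np

def ise_cost(gains, dt=0.01, steps=500):
    """Integral of squared error for a unit-step setpoint on a surrogate plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)
        cost += e * e * dt
    return cost

rng = np.random.default_rng(1)
n, iters = 20, 40
pos = rng.uniform(0.0, 5.0, size=(n, 3))        # particle positions = (Kp, Ki, Kd)
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([ise_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([ise_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("tuned (Kp, Ki, Kd):", gbest)
```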
In this paper, an intelligent tracking control system for both single- and double-axis piezoelectric micropositioner stages is designed using a Genetic Algorithm (GA) to obtain the optimal Proportional-Integral-Derivative (PID) controller tuning parameters. The GA-based PID control design approach is a methodology for tuning a PID controller in an optimal-control sense with respect to a specified objective function. Using the GA-based PID control approach, high-performance trajectory tracking responses of the piezoelectric micropositioner stage can be obtained. The GA code was built and the simulation results were obtained in the MATLAB environment. The piezoelectric micropositioner simulation model with the…
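As a rough sketch of the GA side of this approach (the paper's code is in MATLAB; this is not a reproduction of it), PID gains can be treated as real-valued chromosomes evolved with tournament selection, blend crossover, and Gaussian mutation. The fitness below is a stand-in; in the paper it would be a tracking-error objective evaluated on the micropositioner model.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(gains):
    # Stand-in objective: distance to an assumed "good" gain set (illustrative only).
    target = np.array([4.0, 1.2, 0.05])
    return -np.sum((gains - target) ** 2)

def tournament(pop, fit, k=3):
    """Pick the best of k randomly chosen individuals."""
    idx = rng.integers(0, len(pop), size=k)
    return pop[idx[np.argmax(fit[idx])]]

pop = rng.uniform(0.0, 10.0, size=(30, 3))        # chromosomes = (Kp, Ki, Kd)
for _ in range(60):
    fit = np.array([fitness(p) for p in pop])
    children = []
    for _ in range(len(pop)):
        a, b = tournament(pop, fit), tournament(pop, fit)
        alpha = rng.random(3)
        child = alpha * a + (1 - alpha) * b           # blend crossover
        child += rng.normal(scale=0.1, size=3)        # Gaussian mutation
        children.append(np.clip(child, 0.0, 10.0))
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print("evolved (Kp, Ki, Kd):", best)
```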
Polarization beam splitter (PBS) integrated waveguides are key components in the receiver of quantum key distribution (QKD) systems. Their function is to analyze the polarization of the incoming light and separate the transverse-electric (TE) and transverse-magnetic (TM) polarizations into different waveguides. In this paper, the performance of polarization beam splitters based on a horizontal slot waveguide is investigated for a wavelength of …. The PBS based on the horizontal slot waveguide structure shows a polarization extinction ratio for the quasi-TE and quasi-TM modes larger than …, with an insertion loss below … and a bandwidth of …. The fabrication tolerance of the structure is also analyzed.
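The abstract's numeric values were lost in extraction, so the small helpers below only illustrate how the quoted figures of merit (extinction ratio, insertion loss) are conventionally defined; the power values are illustrative.

```python
import math

def extinction_ratio_db(p_wanted: float, p_unwanted: float) -> float:
    """Polarization extinction ratio: ratio of desired to leaked polarization power, in dB."""
    return 10 * math.log10(p_wanted / p_unwanted)

def insertion_loss_db(p_out: float, p_in: float) -> float:
    """Insertion loss: power lost passing through the device, in dB."""
    return -10 * math.log10(p_out / p_in)

print(extinction_ratio_db(0.95, 0.001))   # ~29.8 dB
print(insertion_loss_db(0.95, 1.0))       # ~0.22 dB
```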
In this work, a model of a source generating the truly random quadrature phase shift keying (QPSK) signal constellation required for a quantum key distribution (QKD) system based on the BB84 protocol with phase coding is implemented using the software package OPTISYSTEM9. The randomness of the generated sequence is achieved by building an optical setup based on a weak laser source, beam splitters, and single-photon avalanche photodiodes operating in Geiger mode. The random string obtained from the optical setup is used to generate the QPSK signal constellation required for phase coding in the BB84-based quantum key distribution system at a bit rate of 2 Gbit/s.
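Independent of OPTISYSTEM9, the sketch below shows one conventional way to map random bit pairs (basis bit, key bit) onto the four QPSK phases used for BB84 phase coding; here the random bits are simulated, whereas in the paper they come from the weak-laser / beam-splitter / SPAD optical setup.

```python
import numpy as np

PHASES = {  # (basis_bit, key_bit) -> phase in radians
    (0, 0): 0.0,
    (0, 1): np.pi,
    (1, 0): np.pi / 2,
    (1, 1): 3 * np.pi / 2,
}

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=(8, 2))              # stand-in for the optical RNG
symbols = np.array([np.exp(1j * PHASES[(b, k)]) for b, k in bits])
print(np.round(symbols, 3))                          # points of the QPSK constellation
```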
The sensitivity of a SnO2-nanoparticle/reduced graphene oxide hybrid to NO2 gas is discussed in the present work using density functional theory (DFT). The SnO2 nanoparticle shapes are taken as pyramids, as established experimentally. The reduced graphene oxide (rGO) edges carry oxygen or oxygen-containing functional groups, whereas the upper and lower surfaces of the rGO are clean, as expected from the oxide reduction procedure. The results show that SnO2 particles attach at the edges of the rGO, forming a p-n heterojunction with reduced agglomeration of the SnO2 particles and high gas sensitivity. The DFT results are in…
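For orientation only: a common DFT figure of merit for gas sensitivity is the NO2 adsorption energy, E_ads = E(surface+NO2) − E(surface) − E(NO2). The abstract does not report these totals, so the numbers below are placeholders, and this is not necessarily the exact quantity used in the paper.

```python
def adsorption_energy(e_complex_ev: float, e_surface_ev: float, e_gas_ev: float) -> float:
    """Negative values indicate energetically favourable (exothermic) adsorption."""
    return e_complex_ev - e_surface_ev - e_gas_ev

print(adsorption_energy(-1052.73, -1031.88, -20.35))  # placeholder energies in eV
```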
This research deals with the absorption and fluorescence spectra of a hybrid of epoxy resin doped with the organic dye Rhodamine 6G (R6G) at different concentrations (5×10⁻⁶, 5×10⁻⁵, 1×10⁻⁵, 1×10⁻⁴, 5×10⁻⁴) mol/ℓ at room temperature. The quantum efficiency Qfm, the fluorescence emission rate Kfm (s⁻¹), the non-radiative lifetime τfm (s), the fluorescence lifetime τf, and the Stokes shift were calculated. The energy gap (Eg) for each dye concentration was also evaluated. The results showed that the maximum quantum efficiency…
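The sketch below writes out the standard photophysical relations behind the quantities named in the abstract (quantum efficiency, emission rate, non-radiative rate, Stokes shift); the wavelengths, efficiency, and lifetime used are placeholders, not the paper's data, and the notation may differ from the paper's.

```python
def stokes_shift_cm1(lambda_abs_nm: float, lambda_em_nm: float) -> float:
    """Stokes shift in wavenumbers (cm^-1) from absorption and emission peak wavelengths."""
    return 1e7 * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)

def rates_from_lifetime(q_f: float, tau_f_s: float):
    """Split the measured fluorescence decay into radiative and non-radiative rates."""
    k_rad = q_f / tau_f_s              # radiative (fluorescence) rate, s^-1
    k_nonrad = (1.0 - q_f) / tau_f_s   # non-radiative rate, s^-1
    return k_rad, k_nonrad

print(stokes_shift_cm1(528.0, 551.0))        # placeholder R6G-like wavelengths
print(rates_from_lifetime(0.9, 4.0e-9))      # placeholder efficiency and lifetime
```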
Compression is the reduction in size of data in order to save space or transmission time. For data transmission, compression can be performed on just the data content or on the entire transmission unit (including header data), depending on a number of factors. In this study, we consider an audio compression method based on text coding, in which the audio file is converted to a text file in order to reduce the time needed to transfer the data over a communication channel. Approach: two coding methods are proposed and applied to optimize the solution using a context-free grammar (CFG). Results: we tested our application using a 4-bit coding algorithm; the results of this method were not satisfactory, so we proposed a new approach to compress audio file…
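The abstract does not detail the exact 4-bit coding or the CFG step, so the sketch below only illustrates the general "audio as text" idea: each audio byte is split into two 4-bit nibbles and written as a text character, producing a text file that a text or grammar-based compressor can then process.

```python
NIBBLE_ALPHABET = "0123456789ABCDEF"          # one character per 4-bit symbol

def audio_bytes_to_text(samples: bytes) -> str:
    chars = []
    for b in samples:
        chars.append(NIBBLE_ALPHABET[b >> 4])     # high nibble
        chars.append(NIBBLE_ALPHABET[b & 0x0F])   # low nibble
    return "".join(chars)

def text_to_audio_bytes(text: str) -> bytes:
    it = iter(text)
    return bytes((NIBBLE_ALPHABET.index(h) << 4) | NIBBLE_ALPHABET.index(l)
                 for h, l in zip(it, it))

raw = bytes([0, 127, 255, 64])                    # stand-in audio samples
txt = audio_bytes_to_text(raw)
assert text_to_audio_bytes(txt) == raw            # round trip is lossless
print(txt)                                        # "007FFF40"
```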
This paper proposes two hybrid feature subset selection approaches based on the combination (union or intersection) of supervised and unsupervised filter approaches before using a wrapper, aiming to obtain low-dimensional features with high accuracy and interpretability and low time consumption. Experiments with the proposed hybrid approaches have been conducted on seven high-dimensional feature datasets. The classifiers adopted are support vector machine (SVM), linear discriminant analysis (LDA), and K-nearest neighbour (KNN). The experimental results demonstrate the advantages and usefulness of the proposed methods for feature subset selection in high-dimensional spaces in terms of the number of selected features and the time spent…
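A minimal sketch of the filter-combination idea follows (not the paper's exact pipeline or datasets): a supervised filter (ANOVA F-score) and an unsupervised filter (feature variance) each keep their top-k features, the union or intersection of the two sets is formed, and the reduced set is handed to a wrapper (sequential selection with KNN). The dataset and k are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
k = 10

# Supervised filter: top-k features by ANOVA F-score.
supervised = set(SelectKBest(f_classif, k=k).fit(X, y).get_support(indices=True))
# Unsupervised filter: top-k features by variance.
unsupervised = set(np.argsort(X.var(axis=0))[-k:])

combined = sorted(supervised | unsupervised)      # use '&' for the intersection variant
X_reduced = X[:, combined]

# Wrapper stage on the reduced feature set.
wrapper = SequentialFeatureSelector(KNeighborsClassifier(), n_features_to_select=5)
wrapper.fit(X_reduced, y)
selected = [combined[i] for i in wrapper.get_support(indices=True)]
print("final feature indices:", selected)
```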
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms can exploit the visual sensitivity and statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Pixel values that occur more frequently are coded with fewer bits than pixel values that occur less frequently…
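The sketch below is a minimal stand-in for the block-then-entropy-code pipeline described above, not the paper's implementation: the image is split into 16 x 16 blocks, each block is flattened into a symbol string, and a per-block frequency model is built; an arithmetic coder driven by such a model spends roughly -log2(p) bits per symbol, which is why frequent pixel values cost fewer bits.

```python
import numpy as np
from collections import Counter

def blocks(img: np.ndarray, size: int = 16):
    """Yield non-overlapping size x size blocks of a 2D image."""
    h, w = img.shape
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield img[r:r + size, c:c + size]

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image

total_bits = 0.0
for block in blocks(image):
    symbols = block.ravel()                       # block flattened to a "string"
    freq = Counter(symbols.tolist())
    n = symbols.size
    # Ideal arithmetic-coding cost under this per-block frequency model.
    total_bits += sum(-cnt * np.log2(cnt / n) for cnt in freq.values())

print(f"estimated size: {total_bits / 8:.0f} bytes vs raw {image.size} bytes")
```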