Recognizing facial expressions and emotions is a basic skill learned at an early age, and it is important for human social interaction. Facial expressions are among the most powerful, natural, and immediate means humans use to express their feelings and intentions. Automatic emotion recognition based on facial expressions has therefore become an active research area and has been applied in many domains, such as security, safety, health, and human-machine interfaces (HMI). As facial expression recognition has moved beyond controlled environmental conditions, and with the success of recent deep learning approaches in different areas, facial expression representation is now mostly based on deep neural networks, which face two critical issues: expression variation and overfitting. Expression variations include identity bias, head pose, and illumination, while overfitting arises from a lack of training data. This paper first discusses the general background and terminology of facial expression recognition in the fields of computer vision and image processing. Secondly, we discuss the general deep learning pipeline. We then review the datasets needed to train and evaluate emotion classifiers, since classifying an input image requires comparison against labeled data. Besides that, we summarize, discuss, and compare various recent approaches that use deep techniques as a basis for facial expression recognition, and we briefly present and highlight the classification of deep features. Finally, we summarize the most critical challenges and issues to be overcome in designing an efficient deep facial expression recognition system.
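The general pipeline described above (preprocessing, feature extraction, classification into emotion categories) can be sketched minimally in NumPy. This is an illustrative forward pass only, not any surveyed architecture: the weights are random stand-ins for trained parameters, and the 48×48 input size and seven-emotion label set are assumptions borrowed from common FER datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Randomly initialized weights stand in for trained network parameters.
W1, b1 = rng.normal(size=(48 * 48, 64)) * 0.01, np.zeros(64)
W2, b2 = rng.normal(size=(64, 7)) * 0.01, np.zeros(7)

def predict(face_48x48):
    x = face_48x48.reshape(-1) / 255.0    # preprocessing: flatten + normalize
    h = np.maximum(0, x @ W1 + b1)        # feature-extraction layer (ReLU)
    p = softmax(h @ W2 + b2)              # classification layer (softmax)
    return EMOTIONS[int(np.argmax(p))], p

label, probs = predict(rng.integers(0, 256, size=(48, 48)))
```

In a real FER system the dense feature-extraction layer would be replaced by convolutional layers trained on a labeled dataset.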
In this paper, a modified approach has been used to find the approximate solution of ordinary delay differential equations with constant delay, using a collocation method based on Bernstein polynomials.
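The collocation idea can be illustrated on a toy problem (this is a generic Bernstein-collocation sketch, not the paper's modified method): solve y'(t) = -y(t-1) on [0, 1] with history y(t) = 1 for t ≤ 0, whose exact solution on [0, 1] is y(t) = 1 - t. The solution is expanded in the Bernstein basis and the delay equation is enforced at collocation points.

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    # Bernstein basis polynomial B_{i,n}(t) on [0, 1]
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bernstein_deriv(i, n, t):
    # d/dt B_{i,n}(t) = n * (B_{i-1,n-1}(t) - B_{i,n-1}(t))
    left = bernstein(i - 1, n - 1, t) if i >= 1 else 0.0
    right = bernstein(i, n - 1, t) if i <= n - 1 else 0.0
    return n * (left - right)

n = 5
ts = np.linspace(0.1, 1.0, n)          # collocation points in (0, 1]
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
# Initial condition y(0) = 1.
for i in range(n + 1):
    A[0, i] = bernstein(i, n, 0.0)
b[0] = 1.0
# Collocation: y'(t_j) = -y(t_j - 1); since t_j - 1 <= 0, the history gives -1.
for j, t in enumerate(ts, start=1):
    for i in range(n + 1):
        A[j, i] = bernstein_deriv(i, n, t)
    b[j] = -1.0
c = np.linalg.solve(A, b)

def y(t):
    return sum(c[i] * bernstein(i, n, t) for i in range(n + 1))
```

Because the exact solution is a polynomial, the degree-5 Bernstein expansion reproduces it to machine precision; in general the error decreases as the degree and number of collocation points grow.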
This paper proposes a new encryption method that combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation: our approach merges 64 bits of DES with 64 bits of AES to produce a 128-bit root key from which the remaining 15 keys are derived, which increases the complexity of the ciphering process; each derivation shifts the key by only one bit to the right. The second is the nature of the encryption process: it uses two keys and mixes one round of DES with one round of AES to reduce execution time. The W-method deals with
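The key-schedule idea described above can be sketched as follows. This is an illustrative reading only: the two 64-bit halves below are placeholder constants, not outputs of actual DES or AES rounds, and the one-bit right rotation per derived key is an assumption based on the description.

```python
MASK128 = (1 << 128) - 1

def ror128(x, r):
    # Rotate a 128-bit value right by r bits.
    r %= 128
    return ((x >> r) | (x << (128 - r))) & MASK128

# Placeholder 64-bit halves standing in for DES- and AES-derived material.
des_half = 0x0123456789ABCDEF
aes_half = 0xFEDCBA9876543210

# Merge into a 128-bit root key, then derive the 15 remaining round keys,
# each shifted one further bit to the right.
root = (des_half << 64) | aes_half
round_keys = [ror128(root, k) for k in range(1, 16)]
```

Rotation (rather than a plain shift) is used here so that no key bits are discarded; a plain right shift would progressively zero out the high bits of later keys.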
Twelve compounds containing a sulphur- or oxygen-based heterocyclic core, a 1,3-oxazole or 1,3-thiazole ring with hydroxy, methoxy, and methyl terminal substituents, were synthesized and characterized. The molecular structures of these compounds were confirmed by elemental analysis and different spectroscopic techniques. The liquid crystalline behaviors were studied using hot-stage optical polarizing microscopy and differential scanning calorimetry. All compounds with a 1,4-disubstituted benzene core and an oxazole ring display a liquid crystalline smectic A (SmA) mesophase. The compounds with 1,3- and 1,4-disubstituted benzene cores and a thiazole ring exhibit exclusively enantiotropic nematic liquid crystal phases.
The aim of this paper is to present a new methodology for finding the private key of RSA. A new initial value, generated from a new equation, is selected to speed up the process. Once this value is found, a brute force attack is used to discover the private key. In addition, in the proposed equation, the multiplier of the Euler totient function used to relate the public and private keys is assigned as 1, which implies that the equation estimating the new initial value is suitable for this small multiplier. The experimental results show that if all prime factors of the modulus are larger than 3 and the multiplier is 1, the distance between the initial value and the private key
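The general idea (an initial estimate followed by a brute-force scan) can be sketched on textbook-sized RSA parameters. This is not the paper's equation: the initial value below simply assumes the totient multiplier k = 1 in e·d = 1 + k·φ(n), and the scan walks upward from there until the modular-inverse condition holds.

```python
def brute_force_private_key(e, phi):
    # Initial estimate assuming multiplier k = 1: d0 = (1 + phi) // e.
    d = max((1 + phi) // e, 1)
    # Brute-force scan upward until e*d ≡ 1 (mod phi).
    while (e * d) % phi != 1:
        d += 1
    return d

p, q, e = 61, 53, 17
phi = (p - 1) * (q - 1)               # 3120
d = brute_force_private_key(e, phi)   # textbook example: d = 2753
```

For these parameters the scan starts at d0 = 183 instead of 1, illustrating how a good initial value shortens the search; the paper's contribution is a tighter estimate of that starting point.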
In this paper, reliable computational methods (RCMs) based on monomial standard polynomials have been executed to solve the problem of Jeffery-Hamel flow (JHF). In addition, convenient base functions, namely Bernoulli, Euler, and Laguerre polynomials, have been used to enhance the reliability of the computational methods. Using such functions turns the problem into a solvable set of nonlinear algebraic equations that Mathematica® 12 can solve. The JHF problem has been solved with the help of Improved Reliable Computational Methods (I-RCMs), and a review of the methods has been given. Published results are also used for comparison. As further evidence of the accuracy and dependability of the proposed methods, the maximum error remainder
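The reduction to a nonlinear algebraic system can be sketched for the standard Jeffery-Hamel equation f''' + 2αRe·f·f' + 4α²f' = 0 with f(0) = 1, f'(0) = 0, f(1) = 0. This is a generic monomial-basis collocation sketch solved with SciPy rather than Mathematica, and the half-angle α and Reynolds number Re below are assumed illustrative values, not the paper's cases.

```python
import numpy as np
from scipy.optimize import fsolve

alpha, Re = np.pi / 36, 10.0   # assumed half-angle (5 degrees) and Reynolds number
N = 8                          # degree of the monomial (standard polynomial) ansatz

def f_val(c, x):
    return sum(c[i] * x**i for i in range(N + 1))

def f_d1(c, x):
    return sum(i * c[i] * x**(i - 1) for i in range(1, N + 1))

def f_d3(c, x):
    return sum(i * (i - 1) * (i - 2) * c[i] * x**(i - 3) for i in range(3, N + 1))

xs = np.linspace(0.05, 0.95, N - 2)   # interior collocation points

def system(c):
    # Boundary conditions: f(0) = 1, f'(0) = 0, f(1) = 0.
    eqs = [f_val(c, 0.0) - 1.0, f_d1(c, 0.0), f_val(c, 1.0)]
    # ODE residual at each collocation point.
    for x in xs:
        eqs.append(f_d3(c, x)
                   + 2 * alpha * Re * f_val(c, x) * f_d1(c, x)
                   + 4 * alpha**2 * f_d1(c, x))
    return eqs

c0 = np.zeros(N + 1)
c0[0], c0[2] = 1.0, -1.0   # start Newton iteration from the low-Re profile 1 - x^2
c = fsolve(system, c0)
```

Swapping the monomials for Bernoulli, Euler, or Laguerre polynomials changes only the basis functions and their derivatives; the resulting nonlinear algebraic system has the same shape.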
This research depends on the relationship between the reflected spectrum, the nature of each target, its area, and the percentage of its presence among other targets within the target area. Changes in land cover have been detected across different years using satellite images processed with the Modified Spectral Angle Mapper (MSAM), where Landsat satellite images are analyzed using two software packages (MATLAB 7.11 and ERDAS Imagine 2014). The proposed supervised classification method (MSAM), implemented in MATLAB, together with a supervised classification method (maximum likelihood classifier) in ERDAS Imagine, have been used to obtain more precise results and detect environmental changes over the periods. Despite using two classification
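The core of any spectral-angle classifier is the angle between a pixel's spectrum and each class reference spectrum. The sketch below shows the standard SAM computation (not the paper's modified variant, whose modification is not specified here); the three-band reference spectra are made-up illustrative values.

```python
import numpy as np

def spectral_angle(pixel, ref):
    # Angle in radians between a pixel spectrum and a reference spectrum.
    cosang = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(pixel, references):
    # Assign the class whose reference spectrum makes the smallest angle.
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles)), min(angles)

refs = [np.array([0.1, 0.4, 0.5]),    # e.g. vegetation-like reference
        np.array([0.6, 0.3, 0.1])]    # e.g. soil-like reference
label, ang = sam_classify(np.array([0.2, 0.8, 1.0]), refs)
```

Because the angle depends only on spectral shape, not magnitude, SAM is insensitive to overall illumination differences between scenes, which is one reason it suits multi-date change detection.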
Finding communities of connected individuals in complex networks is challenging, yet crucial for understanding different real-world societies and their interactions. Recently, attention has turned to discovering the dynamics of such communities. However, detecting accurate community structures that evolve over time adds additional challenges. Almost all state-of-the-art algorithms are designed on seemingly the same principle, treating the problem as a coupled optimization model that simultaneously identifies community structures and their evolution over time. Unlike these studies, the current work aims to consider these three measures individually, i.e., the intra-community score, the inter-community score, and the evolution of the community over
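The three measures named above can be given simple concrete forms. This sketch uses plain edge counts for the intra- and inter-community scores and a pairwise-agreement (Rand-index-like) score for evolution between consecutive snapshots; these are generic illustrative definitions, not the objective functions of the paper.

```python
from itertools import combinations

def intra_inter(edges, part):
    # part maps node -> community id; count edges inside vs. across communities.
    intra = sum(1 for u, v in edges if part[u] == part[v])
    return intra, len(edges) - intra

def evolution_score(part_t, part_t1):
    # Fraction of node pairs whose "same community" relation is preserved
    # between two consecutive snapshots (Rand-index-like agreement).
    agree = total = 0
    for u, v in combinations(sorted(part_t), 2):
        total += 1
        if (part_t[u] == part_t[v]) == (part_t1[u] == part_t1[v]):
            agree += 1
    return agree / total

edges = [(0, 1), (1, 2), (2, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
intra, inter = intra_inter(edges, part)
smooth = evolution_score(part, part)
```

A good dynamic partition scores high on intra-community edges, low on inter-community edges, and high on agreement with the previous snapshot; treating the three separately, as the abstract proposes, avoids the weighting trade-offs of a single coupled objective.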