Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation and mel-frequency cepstral coefficient (MFCC) features are extracted and fed to a random forest classifier. In addition, a bi-modal system for recognising emotions from facial expressions and speech signals is presented. This is important because one modality may not provide sufficient information, or may be unavailable for reasons beyond the operator's control. To this end, decision-level fusion is performed using a novel weighting scheme based on the proportions of the facial and speech impressions. The results show an average accuracy of 93.22%.
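A minimal sketch of the speech branch and of weighted decision-level fusion, assuming librosa and scikit-learn; the feature list follows the abstract, but the function names, the MFCC order and the fusion weights are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the speech branch and decision-level fusion described above.
# Assumes librosa and scikit-learn; names and weights are illustrative.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(path, sr=16000, n_mfcc=13):
    """Zero-crossing rate, mean, standard deviation and MFCCs of one utterance."""
    y, sr = librosa.load(path, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    return np.concatenate([[zcr, y.mean(), y.std()], mfcc])

# Speech branch: random forest over the hand-crafted features.
speech_clf = RandomForestClassifier(n_estimators=200, random_state=0)
# speech_clf.fit(np.stack([speech_features(p) for p in train_wavs]), train_labels)

def fuse_decisions(p_face, p_speech, w_face=0.6):
    """Weighted decision-level fusion of the two modalities' class probabilities.
    The fixed weight stands in for the paper's proportion-based weighting scheme."""
    return w_face * p_face + (1.0 - w_face) * p_speech
```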
This paper critically examines the studies that have investigated social network sites in the Arab region, asking whether they made a practical addition to the field of information and communication sciences or not. The study tries to lift the ambiguity created by the variety of names given to these sites, and reviews the most important theoretical and methodological approaches used by these studies, highlighting their scientific limitations. The research discusses the most important concepts used by these studies, such as Interactivity, Citizen Journalism, the Public Sphere, and Social Capital, and shows the problems of using them, because each concept comes out of a specific view of these websites. The importation of these concepts from one cultural and social context to an Arab …
The limitations of wireless sensor nodes are power, computational capability, and memory. This paper suggests a method to reduce the power consumed by a sensor node. The work is based on an analogy between the routing problem and the distribution of an electric field in a physical medium with a given charge density. From this analogy, a set of partial differential equations (Poisson's equation) is obtained. A finite-difference method is utilized to solve this set numerically, and a parallel implementation is then presented. The parallel implementation is based on domain decomposition, where the original calculation domain is decomposed into several blocks, each of which is assigned to a processing element. All nodes then execute computations in parallel …
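As an illustration of the numerical core, the sketch below solves Poisson's equation on a small 2D grid with a Jacobi finite-difference iteration (NumPy only). The grid size, the single point charge and the function names are assumptions; a parallel version in the spirit of the paper would split the grid into blocks and exchange boundary rows between processing elements.

```python
# Solve -laplacian(u) = rho on a 2D grid with zero boundary values,
# using a Jacobi finite-difference iteration (illustrative, serial version).
import numpy as np

def solve_poisson(rho, h=1.0, iters=5000):
    """Return the potential u for -laplacian(u) = rho with u = 0 on the boundary."""
    u = np.zeros_like(rho)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] +
                                h * h * rho[1:-1, 1:-1])
    return u

rho = np.zeros((64, 64))
rho[32, 32] = 1.0            # a single "charge" (e.g., the sink node)
u = solve_poisson(rho)       # routes can then follow the gradient of u
```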
Assessing the actual accuracy of laboratory devices prior to first use is very important in order to know the capabilities of such devices and employ them in multiple domains. Since the device manual provides accuracy values only under laboratory conditions, an actual evaluation process is necessary.
In this paper, the accuracy of the cameras of the Stonex X-300 laser scanner was evaluated; these cameras are attached to the device and play a supporting role in its operation. This is particularly important because the device manual does not contain sufficient information about them.
To determine the accuracy of these cameras when used in close-range photogrammetry, the Stonex X-300 laser scanning device …
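A common way to quantify such accuracy in close-range photogrammetry is the root-mean-square error between camera-derived and reference target coordinates; the sketch below is a hypothetical illustration of that check, not the paper's exact procedure, and the coordinates are made up.

```python
# RMSE per coordinate axis between camera-derived and reference target coordinates.
import numpy as np

def rmse_per_axis(measured, reference):
    """Root-mean-square error per axis (measured, reference: N x 3 arrays)."""
    diff = np.asarray(measured) - np.asarray(reference)
    return np.sqrt((diff ** 2).mean(axis=0))

# Example with made-up coordinates (metres):
measured  = [[1.002, 2.001, 0.499], [3.001, 1.998, 1.503]]
reference = [[1.000, 2.000, 0.500], [3.000, 2.000, 1.500]]
print(rmse_per_axis(measured, reference))   # -> per-axis RMSE in metres
```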
This paper examines the change in the planning pattern in Lebanon, which relies on the vehicle as an almost exclusive mode of transport, and its redirection towards re-shaping the city and introducing concepts of "smooth" or "flexible" mobility into its schemes: the concept of a "compact city" with an infrastructure based on a culture of flexible mobility. Taking into consideration the environmental, economic and health risks of the existing model, the paper focuses on the four foundations of the concept of a "city based on a flexible mobility culture" and provides a SWOT analysis to encourage a shift in the planning methodology.
Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating expert behaviors in their demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations in order to learn a complex behavior. Such a disadvantage has limited the potential of imitation learning in complex tasks where the expert demonstrations are not sufficient. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies using only a single demonstration. The proposed model is evaluated on two simulated tasks in comparison with other methods. The results show that our proposed model is capable of completing co …
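The adversarial idea can be illustrated with a minimal GAIL-style sketch in PyTorch: a discriminator learns to separate expert (state, action) pairs from the policy's, and its output serves as a surrogate reward for the policy. The network sizes, dimensions and function names are assumptions and do not reproduce the paper's architecture.

```python
# Minimal GAIL-style sketch: discriminator over (state, action) pairs plus a
# surrogate reward for the policy. Shapes and names are illustrative only.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):                      # logit that (s, a) is expert-like
        return self.net(torch.cat([s, a], dim=-1))

disc = Discriminator(state_dim=4, action_dim=2)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def disc_step(expert_s, expert_a, policy_s, policy_a):
    """One discriminator update: expert pairs labelled 1, policy pairs 0."""
    logits = torch.cat([disc(expert_s, expert_a), disc(policy_s, policy_a)])
    labels = torch.cat([torch.ones(len(expert_s), 1), torch.zeros(len(policy_s), 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def imitation_reward(s, a):
    """Surrogate reward for the policy: higher when the pair looks expert-like."""
    with torch.no_grad():
        return torch.sigmoid(disc(s, a)).log()
```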
Mobile ad-hoc networks (MANETs) are composed of mobile nodes communicating through a wireless medium, without any fixed centralized infrastructure. Providing quality of service (QoS) support to multimedia streaming applications over MANETs is vital. This paper focuses on the QoS support provided by the stream control transmission protocol (SCTP) and the TCP-friendly rate control (TFRC) protocol to multimedia streaming applications over MANETs. In this study, three QoS parameters were considered jointly: (1) packet delivery ratio (PDR), (2) end-to-end delay, and (3) throughput. Specifically, the authors analyzed and compared the simulated performance of the SCTP and TFRC transport protocols for delivering multimedia streaming over MANETs.
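The three QoS metrics can be computed directly from a packet trace; the sketch below shows one way to do so in Python. The trace format, field names and function name are assumptions for illustration, not the study's simulation setup.

```python
# Compute PDR, mean end-to-end delay and throughput from a simple packet trace.
def qos_metrics(sent, received, sim_time):
    """sent: list of (pkt_id, t_send, size_bytes); received: list of (pkt_id, t_recv)."""
    sent_t = {pid: t for pid, t, _ in sent}
    sizes = {pid: size for pid, _, size in sent}
    delays = [t_recv - sent_t[pid] for pid, t_recv in received if pid in sent_t]
    pdr = len(received) / len(sent)                                    # packet delivery ratio
    delay = sum(delays) / len(delays) if delays else float("nan")      # mean end-to-end delay (s)
    throughput = 8 * sum(sizes[pid] for pid, _ in received) / sim_time # bits per second
    return pdr, delay, throughput
```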
A genetic algorithm model coupled with an artificial neural network model was developed to find the optimal values of the upstream and downstream cutoff lengths, the length of the floor and the length of the downstream protection required for a hydraulic structure. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The objective function to be minimized was the cost function, with relative cost coefficients obtained for the different dimensions. The constraints used were those that satisfy a factor of safety of 2 against uplift-pressure failure and 3 against piping failure.
A total of 1200 different cases were modeled and analyzed using GeoStudio, with different values of the input variables. The soil was …
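The optimization loop can be sketched as a simple penalty-based genetic algorithm: candidate dimension sets are scored by the cost function, with large penalties when the safety factors fall below 2 (uplift) or 3 (piping). The cost coefficients, bounds and the linear placeholder standing in for the trained ANN surrogate are illustrative assumptions only.

```python
# Penalty-based genetic algorithm over four dimensions (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
COST = np.array([1.0, 1.0, 2.0, 1.5])                               # relative cost coefficients
LOW, HIGH = np.array([1, 1, 5, 2.0]), np.array([10, 10, 30, 15.0])  # bounds in metres

def safety_factors(x):
    """Placeholder for the trained ANN mapping dimensions to (FS_uplift, FS_piping)."""
    return 0.4 * x[2] + 0.2 * (x[0] + x[1]), 0.3 * x[2] + 0.3 * x[0]

def fitness(x):
    fs_uplift, fs_piping = safety_factors(x)
    penalty = 1e3 * (max(0, 2 - fs_uplift) + max(0, 3 - fs_piping))  # constraint violations
    return COST @ x + penalty

pop = rng.uniform(LOW, HIGH, size=(50, 4))
for _ in range(200):                                   # selection + crossover + mutation
    parents = pop[np.argsort([fitness(x) for x in pop])[:25]]
    children = (parents[rng.integers(25, size=50)] + parents[rng.integers(25, size=50)]) / 2
    pop = np.clip(children + rng.normal(0, 0.3, children.shape), LOW, HIGH)
best = min(pop, key=fitness)                            # cheapest safe design found
```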
A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and the protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The optimization carried out is subject to constraints that ensure a safe structure against …
This study investigated the radio frequency (RF) signal propagation of Global System for Mobile Communications (GSM) coverage in Emmanuel Alayande College of Education (EACOED), Oyo, Oyo State, Nigeria. The study aims to improve the quality of service and enhance end users' awareness of how the wireless services perform. A drive-test method was adopted, with estimation of the coverage level and received signal strength. The Network Cell Info Lite application installed on three INFINIX GSM mobile phones was employed to measure the signal strength received from the transmitting stations of the different mobile networks. The results of the study revealed that MTN has the maximum signal strength, with a mean value …
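Summarising such drive-test measurements typically means grouping received signal strength samples by network and reporting the mean; the short sketch below illustrates this with made-up RSSI values in dBm (the networks and numbers are not the study's data).

```python
# Group drive-test RSSI samples (dBm) by network and report the mean per network.
from collections import defaultdict

samples = [("MTN", -71), ("MTN", -68), ("Glo", -85), ("Airtel", -92), ("Glo", -80)]

by_net = defaultdict(list)
for net, rssi in samples:
    by_net[net].append(rssi)

for net, vals in sorted(by_net.items()):
    print(f"{net}: mean RSSI = {sum(vals) / len(vals):.1f} dBm over {len(vals)} samples")
```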
Breast cancer is a heterogeneous disease characterized by molecular complexity. This research utilized three genetic expression profiles: gene expression, deoxyribonucleic acid (DNA) methylation, and micro ribonucleic acid (miRNA) expression. The aim was to deepen the understanding of breast cancer biology and contribute to the development of a reliable survival rate prediction model. During the preprocessing phase, principal component analysis (PCA) was applied to reduce the dimensionality of each dataset before computing consensus features across the three omics datasets. By integrating these datasets with the consensus features, the model's ability to uncover deep connections within the data was significantly improved. The proposed multimodal deep …
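The preprocessing step described above can be sketched with scikit-learn: PCA is applied to each omics matrix separately and the reduced blocks are then combined for the downstream survival model. The component counts, the standardisation step and the simple concatenation used here to combine blocks are assumptions, not the paper's exact consensus-feature procedure.

```python
# Per-omics PCA reduction followed by a simple combination of the reduced blocks.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_block(X, n_components=50):
    """Standardise one omics block (samples x features) and project it with PCA."""
    Z = StandardScaler().fit_transform(X)
    return PCA(n_components=n_components).fit_transform(Z)

# gene_expr, methylation, mirna: arrays with the same samples in the same row order
# fused = np.hstack([reduce_block(gene_expr),
#                    reduce_block(methylation),
#                    reduce_block(mirna, n_components=30)])
```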