The study consists of video clips of all cars parked in the selected area. The camera height is 1.5 m, and 18 video clips were recorded. Images are extracted from the video clips to serve as training data for the cascade method. After the training step, cascade classification is used to detect license plates. The Viola-Jones algorithm was applied to the output of the cascade data for the 1.5 m camera height. Accuracy was calculated for all data, covering different weather conditions and local recording times, in two ways. The first detected the car plate directly from the video clip, and the accuracy was 100%. The second used the clipped images stored in the positive file, based on the training file (XML file), where the accuracy was 99.8%.
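As an illustration of the cascade-based detection described above, the following Python sketch applies a trained cascade (XML file) to video frames with OpenCV. The cascade file name, clip name, and detector parameters are assumptions for the example, not the study's actual settings.

```python
# Minimal sketch: applying a trained cascade (XML) to video frames with OpenCV.
# "plate_cascade.xml" and "parking_clip.mp4" are illustrative file names.
import cv2

cascade = cv2.CascadeClassifier("plate_cascade.xml")  # hypothetical trained cascade
cap = cv2.VideoCapture("parking_clip.mp4")            # hypothetical input clip

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale runs the Viola-Jones sliding-window detector
    plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in plates:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("plates", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```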
A study is made of the size and dynamic activity of sunspots using the automatic-detection Matlab code ''mySS.m'' written for this purpose, which mainly produces a good estimate of the sunspot diameter (in km). The theory of sunspot size is described using equations from which the growth and decay phases and the sunspot area can be calculated. Two types of images, namely H-alpha and HMI magnetograms, have been used. The results are divided into four main parts. The first part is automatic detection of sunspot size by the Matlab program. The second part is the numerical calculation of the sunspot growth and decay phases. The third part is the calculation of the sunspot area. The final part explains the sunspot activity.
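The authors' mySS.m is not reproduced here; the Python sketch below only illustrates the general idea of estimating a sunspot's equivalent diameter in km from a thresholded solar image. The intensity threshold and the plate scale (km per pixel) are assumed values for the illustration.

```python
# Illustrative sketch (not the authors' mySS.m): estimate a sunspot's
# equivalent diameter in km from a thresholded full-disk image.
import numpy as np

KM_PER_PIXEL = 700.0  # assumed plate scale for this illustration


def sunspot_diameter_km(image: np.ndarray, threshold: float) -> float:
    """Count dark pixels below `threshold` and convert the resulting area
    to an equivalent circular diameter in km."""
    spot_pixels = np.count_nonzero(image < threshold)
    area_km2 = spot_pixels * KM_PER_PIXEL ** 2
    return 2.0 * np.sqrt(area_km2 / np.pi)


# Example on synthetic data: a small dark disk inside a bright frame.
img = np.full((256, 256), 1.0)
yy, xx = np.ogrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 10 ** 2] = 0.1
print(round(sunspot_diameter_km(img, threshold=0.5)), "km")
```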
Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and the heart rate into one system to improve the accuracy of human emotion recognition. The proposed system is designed to recognize four human emotions: angry, happy, sad and normal. In this system, the smartphone is used to record the user's speech and send it to a server. The smartwatch, fixed on the user's wrist, is used to measure the user's heart rate while the user is speaking and to send it via Bluetooth.
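As a hedged illustration of the fusion idea only (not the paper's implementation), the sketch below appends a heart-rate reading to a speech feature vector and feeds the combined vector to a simple classifier. The feature dimensions, classifier choice, and toy data are all assumptions; only the four emotion labels come from the abstract.

```python
# Illustrative sketch: fusing speech features with a heart-rate reading
# into one vector for emotion classification. Feature details and the
# classifier are assumptions, not the paper's exact design.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

EMOTIONS = ["angry", "happy", "sad", "normal"]


def fuse(speech_features: np.ndarray, heart_rate_bpm: float) -> np.ndarray:
    """Append the heart rate to the speech feature vector."""
    return np.concatenate([speech_features, [heart_rate_bpm]])


# Toy training data: 40 random samples of 13 speech features + heart rate.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.normal(size=13), rng.uniform(60, 120)) for _ in range(40)])
y = rng.integers(0, len(EMOTIONS), size=40)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
sample = fuse(rng.normal(size=13), 95.0)
print("predicted:", EMOTIONS[clf.predict(sample.reshape(1, -1))[0]])
```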
Palm vein recognition is one of the biometric systems used for identification and verification, since each person has unique vein characteristics. In this paper, an improvement to a palm vein recognition system is made. The system is based on centerline extraction of the veins and employs the Difference-of-Gaussian (DoG) function to construct the feature vector. Test results on our database showed an identification rate of 100%, with a minimum error rate of 0.333.
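A minimal sketch of the Difference-of-Gaussian idea mentioned above, assuming SciPy; the two sigma values and the synthetic test image are illustrative choices, not the paper's parameters.

```python
# Minimal Difference-of-Gaussian (DoG) sketch: subtract two Gaussian blurs
# of the same image to emphasise line-like structures such as vein
# centerlines. The sigma values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def dog(image: np.ndarray, sigma_small: float = 1.0, sigma_large: float = 3.0) -> np.ndarray:
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)


# Example: DoG response of a synthetic image containing one bright line.
img = np.zeros((64, 64))
img[32, :] = 1.0
response = dog(img)
print(response.shape, float(response.max()))
```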
In the current generation of technology, a robust security system is required based on a biometric trait such as human gait, a smooth biometric feature for recognizing humans via their walking pattern. In this paper, a person is recognized from his gait style, captured from video motion previously recorded with a digital camera. The video is handled in several phases after splitting it into successive images (called frames), which pass through a preprocessing step before the classification procedure. The preprocessing steps include converting each image into a gray image, removing all undesirable components and noise, and detecting the differences between successive frames.
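The sketch below illustrates this kind of frame-level preprocessing with OpenCV (not the paper's exact pipeline): the clip is split into frames, each frame is converted to grayscale, and the moving silhouette is isolated by background subtraction. File name and subtractor parameters are assumptions.

```python
# Illustrative preprocessing sketch for gait video: split the clip into
# frames, convert to grayscale, and isolate the moving silhouette with
# background subtraction. Parameters are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("gait_clip.mp4")  # hypothetical recorded clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = subtractor.apply(gray)      # foreground (walking person) mask
    mask = cv2.medianBlur(mask, 5)     # simple noise removal
    silhouettes.append(mask)

cap.release()
print(len(silhouettes), "preprocessed frames")
```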
This work explores the design of an automated unmanned aerial vehicle (UAV) system for object detection, labelling, and localization using deep learning. The system takes pictures with a low-cost camera and uses a GPS unit to record positions. The data is sent to the base station via a Wi-Fi connection.
The proposed system consists of four main parts. First, the drone, which was assembled and installed, with a Raspberry Pi 4 added to control the flight path. Second, the various programs that were installed and downloaded to define the parts of the drone and prepare it for flight; this part also included programs for both the Raspberry Pi 4 and the servo, along with protocols for communication and video transmission.
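The sketch below only illustrates the idea of pairing a detection result with a GPS fix and reporting it to a base station over Wi-Fi as JSON. The GPS reader, the detector, and the base-station URL are all hypothetical placeholders, not the project's code.

```python
# Illustrative sketch (not the project's code): pair an object detection
# with a GPS fix and send it to a base station as JSON over HTTP.
# read_gps(), detect_objects() and the endpoint URL are hypothetical.
import json
import urllib.request


def read_gps():
    """Placeholder for the GPS unit; returns an assumed (lat, lon) fix."""
    return 33.3152, 44.3661


def detect_objects(frame_path):
    """Placeholder for the deep-learning detector; returns labels and boxes."""
    return [{"label": "car", "box": [120, 80, 200, 140]}]


def report(frame_path, url="http://192.168.1.10:8000/detections"):  # assumed endpoint
    lat, lon = read_gps()
    payload = {"position": {"lat": lat, "lon": lon},
               "detections": detect_objects(frame_path)}
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    print(report("frame_0001.jpg"))
```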
Biometrics represent the most practical method for swiftly and reliably verifying and identifying individuals based on their unique biological traits. This study addresses the increasing demand for dependable biometric identification systems by introducing an efficient approach to automatically recognize ear patterns using Convolutional Neural Networks (CNNs). Despite the widespread adoption of facial recognition technologies, the distinct features and consistency inherent in ear patterns provide a compelling alternative for biometric applications. Employing CNNs in our research automates the identification process, enhancing accuracy and adaptability across various ear shapes and orientations. The ear is also visible and easily captured in images.
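A minimal CNN sketch for ear-image classification is shown below, assuming Keras; the layer sizes, input shape, and number of enrolled subjects are illustrative assumptions, not the architecture used in the study.

```python
# Minimal CNN sketch for ear-image classification (illustrative only;
# layer sizes, input shape and subject count are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SUBJECTS = 50          # assumed number of enrolled identities
INPUT_SHAPE = (96, 96, 1)  # assumed grayscale ear-image size

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_SUBJECTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```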
Human action recognition has gained popularity because of its wide applicability, such as in patient monitoring systems, surveillance systems, and a wide range of systems involving interaction between people and electrical devices, including human-computer interfaces. The proposed method includes sequential stages of object segmentation, feature extraction, action detection and then action recognition. Recognizing human actions effectively from different features of unconstrained videos is a challenging task due to camera motion, cluttered backgrounds, occlusions, the complexity of human movements, and the variety of ways the same action is performed by distinct subjects. Thus, the proposed method overcomes such problems by using a fusion of features.
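The sketch below illustrates feature-level fusion in general terms (the specific descriptors used in the paper are not reproduced): two per-frame descriptors are normalised and concatenated into one vector that a classifier can consume.

```python
# Illustrative sketch of feature-level fusion: normalise and concatenate
# two descriptors into a single feature vector. Descriptor names and
# lengths are assumptions, not the paper's features.
import numpy as np


def fuse_features(shape_features: np.ndarray, motion_features: np.ndarray) -> np.ndarray:
    """L2-normalise each descriptor and concatenate them."""
    def normalise(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v
    return np.concatenate([normalise(shape_features), normalise(motion_features)])


# Example with assumed descriptor lengths.
shape_vec = np.random.rand(32)   # e.g. a silhouette/shape descriptor
motion_vec = np.random.rand(16)  # e.g. an optical-flow histogram
print(fuse_features(shape_vec, motion_vec).shape)  # (48,)
```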
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation and mel-frequency cepstral coefficient features are extracted. The extracted features are then fed to a random forest classifier.
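A sketch of the speech branch described above is shown below, assuming librosa for the zero-crossing rate and MFCC features and scikit-learn for the random forest; the file names, labels, and forest size are illustrative, not the paper's settings.

```python
# Sketch of the speech-feature branch: zero-crossing rate, mean, standard
# deviation and MFCCs fed to a random forest. File names, labels and the
# number of trees are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier


def speech_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[zcr, y.mean(), y.std()], mfcc])


# Hypothetical labelled clips (emotion labels encoded as integers).
paths, labels = ["angry_01.wav", "happy_01.wav"], [0, 1]
X = np.stack([speech_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X))
```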