The smart city concept has attracted considerable research attention in recent years across diverse application domains, such as crime suspect identification, border security, transportation, and aerospace. Specific focus has been on increased automation using data-driven approaches, while leveraging remote sensing and real-time streaming of heterogeneous data from various sources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in exploiting such high-temporal-rate data streams, specifically videos, is the trade-off between the quality of video streaming and limited transmission bandwidth. An optimal compromise is therefore needed between video quality on the one hand and the recognition, understanding, and efficient processing of large volumes of video data on the other. This research proposes a novel unified approach to lossy and lossless video frame compression, which benefits the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherence implies a high probability that a macroblock shares the direction of motion of the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to reduce the search time: the partial sum of the matching distortion between the current macroblock and each candidate is abandoned as soon as it surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four step search, three step search, diamond search, and new three step search.
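The partial distortion elimination step can be sketched as follows. This is a minimal illustration using a plain exhaustive search over a small window, not the paper's mean predictive block matching; all function and parameter names are assumptions for illustration. The key idea is that the sum of absolute differences (SAD) is accumulated row by row, and a candidate is abandoned as soon as its partial sum can no longer beat the best match found so far.

```python
import numpy as np

def sad_with_pde(block, candidate, best_so_far):
    """Row-wise partial SAD; abort once the partial sum exceeds best_so_far."""
    partial = 0
    for row in range(block.shape[0]):
        partial += np.abs(block[row].astype(int) - candidate[row].astype(int)).sum()
        if partial >= best_so_far:   # partial distortion elimination:
            return partial           # this candidate can no longer win
    return partial

def best_match(frame, block, top, left, search_range=4):
    """Exhaustive search over a small window around (top, left), pruned by PDE."""
    n = block.shape[0]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > frame.shape[0] or x + n > frame.shape[1]:
                continue
            cost = sad_with_pde(block, frame[y:y + n, x:x + n], best_cost)
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

The pruning never changes the result, only the work done: the returned motion vector is the same as with a full SAD at every candidate.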
In this paper, a new equivalent lumped parameter model is proposed for describing the vibration of beams under the moving load effect. Also, an analytical formula for calculating such vibration for low-speed loads is presented. Furthermore, a MATLAB/Simulink model is introduced to give a simple and accurate solution that can be used to design beams subjected to any moving load, i.e., a load of any magnitude and speed. In general, the proposed Simulink model is considerably easier to use than the alternative FEM software usually employed in designing such beams. The results obtained from the analytical formula and the proposed Simulink model were compared with those obtained from Ansys R19.0, and very good agreement was shown.
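As a point of reference for the low-speed behaviour such formulas target, the textbook modal-superposition solution for a simply supported Euler-Bernoulli beam under a moving constant force can be sketched as below. This is the standard closed-form result (undamped, zero initial conditions), not the paper's lumped-parameter model or its Simulink implementation; the symbols `P` (force), `L` (span), `EI` (flexural rigidity), `rhoA` (mass per unit length), and `v` (load speed) are assumptions for illustration.

```python
import math

def midspan_deflection(P, L, EI, rhoA, v, t, n_modes=15):
    """Deflection at x = L/2 for a simply supported beam under a force P
    moving at speed v, by modal superposition (undamped, zero ICs)."""
    w = 0.0
    for n in range(1, n_modes + 1):
        wn = (n * math.pi / L) ** 2 * math.sqrt(EI / rhoA)  # nth natural frequency
        On = n * math.pi * v / L                            # nth forcing frequency
        # Modal coordinate: steady response plus the free-vibration transient.
        qn = (2 * P / (rhoA * L * (wn ** 2 - On ** 2))) * (
            math.sin(On * t) - (On / wn) * math.sin(wn * t))
        w += qn * math.sin(n * math.pi / 2)                 # mode shape at mid-span
    return w
```

At low speed, when the load reaches mid-span (t = L / 2v), the deflection approaches the static value P L^3 / (48 EI), which is the regime the abstract's low-speed analytical formula addresses.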
Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The key obstacles in image classification are the sheer abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
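The hybrid-learning pattern described above (frozen deep features fed to a classical classifier) can be sketched as follows. VGG-16 itself is deliberately omitted to keep the example self-contained: a frozen random projection with ReLU stands in for the convolutional feature extractor, and a nearest-centroid rule stands in for the machine learning classifier. All names and the synthetic data are illustrative, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, W):
    """Stand-in 'convolutional' extractor: frozen random projection + ReLU.
    (A real hybrid pipeline would use e.g. VGG-16's penultimate layer.)"""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W, 0.0)

def fit_centroids(feats, labels):
    """Minimal classical classifier: one centroid per class in feature space."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats, centroids):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(feats - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Synthetic two-class 8x8 'images': class 0 dark, class 1 bright, plus noise.
X = np.concatenate([rng.normal(0.2, 0.05, (50, 8, 8)),
                    rng.normal(0.8, 0.05, (50, 8, 8))])
y = np.array([0] * 50 + [1] * 50)
W = rng.normal(size=(64, 16))          # frozen 'feature extractor' weights

feats = extract_features(X, W)
centroids = fit_centroids(feats, y)
acc = (predict(feats, centroids) == y).mean()
```

The point of the pattern is the separation of roles: the extractor is trained (or, here, fixed) once, and only the lightweight classifier is fitted to the labeled data.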
This paper presents a study of the application of gas lift (GL) to improve oil production in a Middle East field. The field has been experiencing a rapid decline in production due to a drop in reservoir pressure. GL is a widely used artificial lift technique that increases oil production by reducing the hydrostatic pressure in the wellbore. The study used a full field model to simulate the effects of GL on production. The model was run under different production scenarios, including different water cut and reservoir pressure values. The results showed that GL can significantly increase oil production under all scenarios. The study also found that most wells in the field will soon be closed due to high water cuts. Howev
In this work, a magnetite/geopolymer composite (MGP) was synthesized using a chemical co-precipitation technique. The synthesized material was characterized using several techniques, namely X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), vibrating-sample magnetometry (VSM), field-emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), Brunauer–Emmett–Teller (BET), and Barrett–Joyner–Halenda (BJH) analysis, to determine the structure and morphology of the obtained material. The analysis indicated that the metal oxide predominantly appeared in the form of the spinel structure of magnetite, and that the presence of nano-magnetite had a substantial impact on the surface area and pore st
Time and space are indispensable fundamentals of cinematic art. They contain the characters, their actions, and the nature of events, and they carry expressive capacities that convey many ideas and much information. Combining space and time into a single term yields space-time, one of Einstein's theoretical propositions, which holds that time is an added dimension within space; the present study therefore differs from previous ones, and this is what the researcher established in his research topic, titled (The Dramatic Function of Space-Time Variables in the Narrative Film), which included the following: the research problem, which crystallized in the following question: What is the dramatic function of the tempor
FG Mohammed, HM Al-Dabbas, Iraqi Journal of Science, 2018 - Cited by 6
The searching process using a binary codebook of the combined Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook by 90° to 270° in steps of 90°. Then, we organized each code word according to its angle into four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
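The rotation idea can be illustrated as follows: storing one binary code word and matching against its 90°, 180°, and 270° rotations lets a small codebook cover four orientations. This is a hedged sketch of the general principle only, not the paper's Pour/Flat/Vertical/Zigzag organization; function names are illustrative.

```python
import numpy as np

def rotations(word):
    """A binary code word together with its 90°, 180°, and 270° rotations."""
    return [np.rot90(word, k) for k in range(4)]

def best_codeword(block, codebook):
    """Match an input bit plane against each stored code word and its rotations,
    so one stored word covers four orientations (smaller codebook, same reach)."""
    best = (None, None, np.inf)
    for idx, word in enumerate(codebook):
        for k, rot in enumerate(rotations(word)):
            dist = np.count_nonzero(block != rot)   # Hamming distortion
            if dist < best[2]:
                best = (idx, 90 * k, dist)
    return best   # (codebook index, rotation angle, distortion)
```

A vertical-edge word thus also matches horizontal-edge blocks exactly, at the cost of comparing four orientations per stored word instead of one.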
The current work investigated the combustion efficiency of biodiesel engines under diverse compression ratios (15.5, 16.5, 17.5, and 18.5) and different biodiesel fuels produced from apricot oil, papaya oil, sunflower oil, and tomato seed oil. The combustion process of the biodiesel fuel inside the engine was simulated using ANSYS Fluent v16 (CFD). Numerical simulations were conducted at 1500 rpm on an AV1 diesel engine (Kirloskar). The simulation outcomes demonstrated that increasing the compression ratio (CR) led to increased peak temperature and pressure in the combustion chamber, as well as elevated CO2 and NO mass fractions and decreased CO emission values un
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue degrades the performance of machine learning models because the values of some features will be missing. Therefore, specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, which are support vector machine (SVM), K-nearest neighbour (KNN), and Naïve B
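A population-based search for imputed values, scored by downstream classifier accuracy, can be sketched as follows. Plain random search stands in here for the salp swarm optimizer (the paper's ISSA is not reproduced), and leave-one-out 1-NN accuracy serves as the fitness; all names and the synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_accuracy(X, y):
    """Leave-one-out 1-NN accuracy, used as the imputation fitness."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)             # a point may not be its own neighbour
    return (y[d.argmin(axis=1)] == y).mean()

def impute_by_search(X, y, mask, n_candidates=200):
    """Population-style random search over the missing entries (a simple
    stand-in for a swarm optimizer): each candidate fills the masked cells
    with values drawn within the observed column ranges, and the fill with
    the best downstream 1-NN accuracy wins."""
    lo, hi = np.nanmin(X, axis=0), np.nanmax(X, axis=0)
    rows, cols = np.where(mask)
    best_fill, best_fit = None, -1.0
    for _ in range(n_candidates):
        cand = X.copy()
        cand[rows, cols] = rng.uniform(lo[cols], hi[cols])
        fit = knn_accuracy(cand, y)
        if fit > best_fit:
            best_fit, best_fill = fit, cand
    return best_fill, best_fit

# Two well-separated synthetic classes with a few entries knocked out.
X = np.concatenate([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
X[0, 0] = np.nan
X[25, 1] = np.nan
best_fill, best_fit = impute_by_search(X, y, np.isnan(X))
```

The design choice this illustrates is treating imputation as an optimization problem whose objective is the classifier's performance, rather than a purely statistical fill such as column means.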