Image classification is the process of finding common features in images from various classes and using them to categorize and label those images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new approach of “hybrid learning” that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers. A hybrid supervised learning system is suggested that takes advantage of the rich intermediate features extracted by deep learning, compared with traditional feature extraction, to boost classification accuracy and performance parameters. All classifiers are given the same set of features in order to discover and verify which one yields the best classification under the proposed “hybrid learning” approach. To achieve this, the performance of the classifiers was assessed on a genuine dataset captured by our camera system. The simulation results show that the support vector machine (SVM) achieves a mean square error of 0.011, a total accuracy of 98.80%, and an F1 score of 0.99. The LR classifier comes in second place with a mean square error of 0.035, a total accuracy of 96.42%, and an F1 score of 0.96. The ANN classifier comes in third place with a mean square error of 0.047, a total accuracy of 95.23%, and an F1 score of 0.94. Furthermore, RF, WKNN, DT, and NB follow with accuracy ratios of 91.66%, 90.47%, 79.76%, and 75%, respectively. As a result, the main contribution is the enhancement of the classification performance parameters on images of varying brightness and clarity using the proposed hybrid learning approach.
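As a rough illustration (not the authors' exact pipeline), the sketch below shows the general idea of hybrid learning: convolutional features are taken from a pre-trained VGG-16 backbone and fed to a classical classifier such as an SVM. The array names `X_train`, `y_train`, `X_test`, and `y_test` are hypothetical placeholders.

```python
# Minimal sketch, assuming TensorFlow/Keras and scikit-learn are available:
# extract VGG-16 convolutional features, then train an SVM on them.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

# Pre-trained VGG-16 without the dense head; global average pooling gives a 512-D vector.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in RGB order."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Hypothetical usage with labelled image arrays:
# train_feats = extract_features(X_train)
# clf = SVC(kernel="rbf").fit(train_feats, y_train)
# preds = clf.predict(extract_features(X_test))
# print(accuracy_score(y_test, preds), f1_score(y_test, preds, average="macro"))
```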
An analytical approach based on field data was used to determine the strength capacity of large-diameter bored piles. The deformations and settlements were also evaluated for both vertical and lateral loadings. The analytical predictions are compared with field data obtained from a prototype test pile used at the Tharthar–Tigris Canal Bridge, and they were found to be in acceptable agreement, with a deviation of 12%.
Following ASTM standard D1143M-07e1 (2010), a test schedule of five loading cycles was proposed for vertical loads, together with a series of cyclic loads to simulate horizontal loading. The load test results and analytical data of 1.95
A new method is presented in this work to detect the existence of hidden data as a secret message in images. This method must be applied only to images that have the same visible properties (similar in perspective), where the human eye cannot detect the difference between them.
This method is based on an Image Quality Metric (the Structural Content metric), which compares the original images with the stego images and determines the size of the hidden data. We applied the method to four different images; with it we detected the hidden data and found exactly the same size of the hidden data.
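For illustration only, a minimal sketch of the Structural Content (SC) metric is given below, assuming grayscale images of equal size; the threshold shown is an arbitrary example, not a value from the study.

```python
# Minimal sketch: the Structural Content (SC) metric compares an original image
# with a suspected stego image; a value far from 1 suggests hidden data.
import numpy as np

def structural_content(original, stego):
    """SC = sum(original^2) / sum(stego^2)."""
    original = original.astype(np.float64)
    stego = stego.astype(np.float64)
    return np.sum(original ** 2) / np.sum(stego ** 2)

# Hypothetical usage with two 8-bit grayscale arrays `img_a` and `img_b`:
# sc = structural_content(img_a, img_b)
# if abs(sc - 1.0) > 1e-3:  # illustrative threshold only
#     print("possible hidden data, SC =", sc)
```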
This study used fresh fish: Bigeye (Ilisha megaloptera), Jaffout (Nematalosa nasus), Suboor (Hilsa ilisha), and Carp (Cyprinus carpio), purchased from local markets in Basrah. Oil was extracted by a solvent extraction method at low temperature, and the oil yields obtained were about 6.08%, 10.72%, 13.52%, and 5.61% for Bigeye, Jaffout, Suboor, and Carp, respectively. The crude oils were compared with a vegetable oil (olive oil) and an animal fat (mutton tail fat).
The oils extracted from whole fresh fish, together with the comparison oils, were introduced into a pharmacological system by packing them in capsules with and without garlic extract, and this system was analyzed with chemical tests.
Results were analyzed statistically using the SPSS program with a completely randomized design (CRD).
The mucilage was isolated from mustard seeds and identified by several methods, including thermogravimetric analysis, FTIR, X-ray powder diffraction, and proton NMR. The FTIR spectra of the three gums contain different functional groups; the major bands observed belong to the O–H of hydroxyl groups (3410.15–3010.88 cm⁻¹), aliphatic C–H (2925–2343.51 cm⁻¹), C–O (1072.42–1060.85 cm⁻¹), and C=O (1743.65 cm⁻¹). The thermochemical parameters of the mucilage were evaluated and compared with those of the standard gums; the results indicated that the mucilage decomposed at 392°C with a mass loss of 55%. X-ray analysis showed that the mucilage had a single, broad (not sharp) peak.
Separation of trigonelline, the major alkaloid in fenugreek seeds, is difficult because the extract of these seeds usually contains trigonelline, choline, mucilage, and steroidal saponins, in addition to some other substances. This study aims to isolate the quaternary ammonium alkaloid (trigonelline) and choline, which have similar physicochemical properties, from fenugreek seeds (Trigonella foenum-graecum L.) by modifying the classical method. Seeds were defatted and then extracted with methanol. The presence of alkaloids was detected using Mayer's and Dragendorff's reagents. In this work, trigonelline was isolated with traces of choline by subsequent purification steps using analytical and preparative TLC techniques.
A new de-blurring technique is proposed in order to reduce or remove blur in images. The proposed filter was designed from Lagrange interpolation, adjusted by fuzzy rules, and supported by a wavelet decomposition technique. The proposed Wavelet-Lagrange-Fuzzy filter gives good results for fully and partially blurred regions in images.
Semantic segmentation is an exciting research topic in medical image analysis because it aims to detect objects in medical images. In recent years, approaches based on deep learning have shown more reliable performance than traditional approaches in medical image segmentation. The U-Net network is one of the most successful end-to-end convolutional neural networks (CNNs) presented for medical image segmentation. This paper proposes a multiscale residual dilated convolution neural network (MSRD-UNet) based on U-Net. MSRD-UNet replaces the traditional convolution block with a novel deeper block that fuses multi-layer features using dilated and residual convolutions. In addition, the squeeze-and-excitation attention mechanism (SE) and the s
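As an illustrative sketch only (not the authors' exact MSRD-UNet block), the snippet below shows one common way to fuse multi-scale features using dilated convolutions with a residual shortcut in Keras; the filter counts and dilation rates are assumptions.

```python
# Minimal sketch of a residual dilated convolution block in the Keras functional API.
from tensorflow.keras import layers

def residual_dilated_block(x, filters, dilation_rates=(1, 2, 4)):
    """Fuse features from convolutions with different dilation rates, plus a residual path."""
    branches = [
        layers.Conv2D(filters, 3, padding="same", dilation_rate=d, activation="relu")(x)
        for d in dilation_rates
    ]
    fused = layers.Concatenate()(branches)
    fused = layers.Conv2D(filters, 1, padding="same")(fused)
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # match channel count
    return layers.Activation("relu")(layers.Add()([fused, shortcut]))

# Hypothetical usage inside a U-Net-style encoder:
# inputs = layers.Input(shape=(256, 256, 1))
# features = residual_dilated_block(inputs, 64)
```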
The effect of using three different interpolation methods (nearest neighbour, linear, and non-linear) on a 3D sinogram to restore the data missing when the angular difference is greater than 1° (a 1° step being considered the optimum 3D sinogram) is presented. Two reconstruction methods are adopted in this study: the back-projection method and the Fourier slice theorem method. The results show that the second reconstruction method is promising when combined with linear interpolation and the angular difference is less than 20°.
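For illustration, a minimal sketch of the linear-interpolation step is given below, assuming a 2-D sinogram stored as an array of shape (n_angles, n_detectors); the angle values in the usage comment are hypothetical.

```python
# Minimal sketch: fill in projections missing at intermediate angles by linear
# interpolation along the projection-angle axis of a sinogram.
import numpy as np

def interpolate_sinogram(sinogram, measured_angles, target_angles):
    """Linearly interpolate each detector column over the projection angles."""
    filled = np.empty((len(target_angles), sinogram.shape[1]))
    for col in range(sinogram.shape[1]):
        filled[:, col] = np.interp(target_angles, measured_angles, sinogram[:, col])
    return filled

# Hypothetical usage: projections measured every 5 degrees, restored to 1-degree steps.
# measured = np.arange(0, 180, 5.0)
# target = np.arange(0, 180, 1.0)
# dense_sinogram = interpolate_sinogram(sparse_sinogram, measured, target)
```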
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook from 90° to 270° in steps of 90°. Then, we classified each code word depending on its angle into four types of binary codebooks (i.e. Pour when , Flat when , Vertical when , or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
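For context, a minimal sketch of classical BTC encoding for a single grayscale block is shown below; the rotated-codebook search described above is not reproduced here, and this is not the authors' exact scheme.

```python
# Minimal sketch of classical Block Truncation Coding (BTC) for one image block:
# the block is represented by a binary bit-plane plus two reconstruction levels.
import numpy as np

def btc_encode_block(block):
    mean = block.mean()
    std = block.std()
    bitplane = block >= mean
    q = bitplane.sum()                       # pixels at or above the mean
    m = block.size
    if q in (0, m):                          # flat block: both levels equal the mean
        return bitplane, mean, mean
    low = mean - std * np.sqrt(q / (m - q))  # reconstruction level for 0-bits
    high = mean + std * np.sqrt((m - q) / q) # reconstruction level for 1-bits
    return bitplane, low, high

def btc_decode_block(bitplane, low, high):
    return np.where(bitplane, high, low)
```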