Apple slice grading is useful in post-harvest operations for sorting, grading, packaging, labeling, processing, storage, transportation, and meeting market demand and consumer preferences. Proper grading of apple slices can help ensure the quality, safety, and marketability of the final products, contributing to the overall success of the apple industry's post-harvest operations. This article aims to build a convolutional neural network (CNN) model to classify images of apple slices, taken after immersion in atmospheric plasma at two pressures (1 and 5 atm) and two immersion times (3 and 6 min) as well as in filtered water, according to slice hardness, using the k-Nearest Neighbors (KNN), Tree, Support Vector Machine (SVM), and Artificial Neural Network (ANN) algorithms. The results showed an inverse relationship between storage period and slice hardness, with average hardness values gradually decreasing from 4.33 (day 1) to 3.37 (day 5). Treatment with atmospheric plasma at a pressure of 5 atm and an immersion time of 3 min gave the best results for maintaining slice hardness during storage, recording values of 4.85 (day 1) and 3.68 (day 5) and outperforming the other treatments, with an average improvement rate of 23.09% over five consecutive days. Among the classification algorithms, the ANN achieved the highest accuracy of 97%, while the Tree algorithm achieved the lowest accuracy of 88.7%; the KNN and SVM algorithms achieved accuracies of 94.7% and 95.1%, respectively. The study demonstrated the possibility of using a CNN to classify apple slices based on degree of hardness. Furthermore, applying atmospheric plasma at 5 atm with a 3-min immersion improves the firmness of the apple slices by inhibiting degradative enzymes while preserving cellular structure and tissue quality.
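As a rough illustration of the classifier comparison described above (not the authors' implementation), the following Python sketch trains KNN, Tree, SVM, and ANN models on hypothetical image-derived hardness features; the data, feature dimensions, and hyperparameters are placeholder assumptions.

```python
# Illustrative comparison of the four classifiers mentioned in the abstract,
# assuming image-derived feature vectors X and hardness-class labels y are available.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((300, 64))          # placeholder features extracted from slice images
y = rng.integers(0, 3, 300)        # placeholder hardness classes (e.g. low/medium/high)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```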
Big data analysis has important applications in many areas, such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such as ...
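As a generic illustration of entropy discretization (the paper's multi-resolution summarization structure is not reproduced here), the sketch below selects the binary cut point of one numeric attribute that minimizes the weighted class entropy; the function names and toy data are assumptions.

```python
# Minimal sketch of entropy-based (information-gain) discretization for one numeric
# attribute: choose the binary cut point that minimizes the weighted class entropy.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut(values, labels):
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    best, best_score = None, np.inf
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue
        cut = (values[i] + values[i - 1]) / 2          # candidate midpoint cut
        left, right = labels[:i], labels[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if score < best_score:
            best, best_score = cut, score
    return best, best_score

x = np.array([1.0, 1.2, 2.9, 3.1, 3.3, 5.0])   # toy attribute values
y = np.array([0, 0, 1, 1, 1, 0])               # toy class labels
print(best_cut(x, y))
```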
Background: The main purpose of this study is to determine whether there is any correlation between the level of C-reactive protein (CRP) in gingival crevicular fluid and its serum level in chronic periodontitis patients, and to explore the differences between them according to probing depth. Materials and methods: Forty-seven male subjects were enrolled in this study. Thirty males with chronic periodontitis were considered the study group, which was further subdivided according to probing depth into subgroup 1 (pocket depth ≤ 6 mm) and subgroup 2 (pocket depth > 6 mm). The other 17 subjects were considered controls. For all subjects, clinical examination was performed for the periodontal parameters plaque index (PLI), gingival index (GI), and bleeding on probing (BOP), ...
Contour extraction from two-dimensional echocardiographic images has been a challenge in digital image processing, essentially due to the heavy noise and poor quality of these images and to artifacts such as papillary muscles, intra-cavity structures such as chordae, and valves, which can interfere with endocardial border tracking. In this paper, we present a technique to extract the contours of heart boundaries from a sequence of echocardiographic images, starting with pre-processing to reduce noise and produce better image quality. By pre-processing the images, unclear edges are avoided, and an accurate detection of both the heart boundary and the movement of the heart valves can be obtained.
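A generic pre-processing and contour-extraction pipeline of the kind described above can be sketched with OpenCV as follows; the specific filters, thresholds, and the input file name are assumptions, not the paper's exact method.

```python
# Generic denoise -> edge detect -> contour extraction pipeline for one echo frame.
# Requires: pip install opencv-python
import cv2

frame = cv2.imread("echo_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

denoised = cv2.medianBlur(frame, 5)            # suppress speckle-like noise
smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)
edges = cv2.Canny(smoothed, 30, 90)            # candidate boundary edges

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)   # assume largest contour ~ cavity boundary

result = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
cv2.drawContours(result, [largest], -1, (0, 255, 0), 1)
cv2.imwrite("echo_contour.png", result)
```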
In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra–Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of a hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg–Marquardt algorithm is used for training. A comparison between the results of numerical experiments and the analytic solutions of some examples has been carried out in order to justify the efficiency and accuracy of our method.
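A minimal sketch of the stated architecture (one hidden layer of five log-sigmoid units feeding a single linear output) is given below; the weights are random placeholders and the Levenberg–Marquardt training step is not reproduced.

```python
# Forward pass of a 2-input FFNN with 5 log-sigmoid hidden units and a linear output.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 2)), rng.normal(size=5)   # 2-D input -> 5 hidden units
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)   # 5 hidden units -> linear output

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

def ffnn(x):
    """Evaluate the network at a single 2-D input point x = (x1, x2)."""
    hidden = logsig(W1 @ x + b1)
    return float(W2 @ hidden + b2)

print(ffnn(np.array([0.3, 0.7])))
```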
The approach of this research is to simulate residual chlorine decay through the potable water distribution networks of Gukook city. EPANET software was used for estimating and predicting chlorine concentration at different points in the water network. Data required as program inputs (pipe properties) were taken from the Baghdad Municipality, and factors that affect residual chlorine concentration (pH, temperature, pressure, flow rate) were measured. Twenty-five samples were tested from November 2016 to July 2017. The residual chlorine values varied between 0.2 and 2 mg/L, the pH values varied between 7.6 and 8.2, and the pressure was very weak in this region. Statistical analyses were used to evaluate errors. The calculated concentrations by the calib...
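For reference, the first-order bulk decay model commonly applied to residual chlorine in network simulators such as EPANET is sketched below; the decay constant and travel times are illustrative assumptions, not values from the study.

```python
# First-order bulk chlorine decay along a pipe: C(t) = C0 * exp(-k * t).
import numpy as np

C0 = 2.0                    # initial chlorine concentration at the source, mg/L
k = 0.5                     # first-order bulk decay constant, 1/day (assumed)
t = np.linspace(0, 2, 9)    # water travel time through the network, days

C = C0 * np.exp(-k * t)
for ti, ci in zip(t, C):
    print(f"t = {ti:4.2f} d  ->  residual chlorine = {ci:4.2f} mg/L")
```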
Digital images are widely used in computer applications. This paper introduces a proposed method of image zooming based on the inverse slantlet transform and image scaling. The slantlet transform (SLT) is based on the principle of designing different filters for different scales.
First, we apply the SLT to the color image, transforming it into the slantlet domain, where large coefficients mainly represent the signal and smaller ones represent the noise. These coefficients are then suitably modified by scaling the image up by a factor of 2x2 using box and Bartlett filters, and the inverse slantlet transform is applied to the modified coefficients to obtain the reconstructed (zoomed) image.
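The 2x up-scaling step alone can be illustrated as follows with box and Bartlett (triangular) interpolation kernels; the slantlet transform itself is not a standard library routine and is not reproduced here, and the kernel forms are the textbook versions assumed to match the paper's intent.

```python
# 2x upsampling of a single channel by zero insertion followed by separable filtering.
import numpy as np
from scipy.ndimage import convolve1d

def upsample2x(image, kernel):
    """Zero-insert upsampling by 2 along both axes, then separable filtering."""
    up = np.zeros((image.shape[0] * 2, image.shape[1] * 2), dtype=float)
    up[::2, ::2] = image
    up = convolve1d(up, kernel, axis=0)
    return convolve1d(up, kernel, axis=1)

box = np.array([1.0, 1.0])               # box kernel: pixel replication
bartlett = np.array([0.5, 1.0, 0.5])     # Bartlett (triangular) kernel: linear interpolation

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
print(upsample2x(img, box).shape)        # (8, 8)
print(upsample2x(img, bartlett).shape)   # (8, 8)
```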
Steganography is a means of hiding information within a more obvious form of communication. It exploits the use of host data to hide a piece of information in such a way that it is imperceptible to a human observer. The major goals of effective steganography are high embedding capacity, imperceptibility, and robustness. This paper introduces a scheme for hiding secret images that can be as much as 25% of the host image data. The proposed algorithm uses the orthogonal discrete cosine transform of the host image, and a scaling factor (a) in the frequency domain controls the quality of the stego images. Experimental results of secret image recovery after applying JPEG coding to the stego-images are included.
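A toy sketch of DCT-domain embedding with a scaling factor, in the spirit of the scheme described above but not the authors' exact algorithm, is shown below; the block placement, images, and the value of `a` are assumptions.

```python
# Embed a scaled secret image into high-frequency DCT coefficients of a host image,
# rebuild the stego image with the inverse DCT, then recover the secret.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
host = rng.random((128, 128))        # placeholder host image, values in [0, 1]
secret = rng.random((64, 64))        # placeholder secret image (25% of host pixels)
a = 0.05                             # scaling factor controlling stego-image quality

H = dctn(host, norm="ortho")
H[64:, 64:] += a * secret            # embed the scaled secret in the high-frequency block
stego = idctn(H, norm="ortho")

# Extraction, assuming the host image and scaling factor are known at the receiver
recovered = (dctn(stego, norm="ortho")[64:, 64:] - dctn(host, norm="ortho")[64:, 64:]) / a
print(np.allclose(recovered, secret, atol=1e-8))
```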