Standardized uptake values (SUVs) are frequently used to measure 18F-fluorodeoxyglucose (FDG) uptake in malignancies. In this work, we investigated the relationships between a wide range of parameters and the SUVs measured in the liver. 18F-FDG PET/CT examinations were performed on a total of 59 patients with liver cancer. We determined the liver SUV in patients with a normal BMI (18.5–24.9) and in obese patients with a high BMI (above 30). Each SUV was adjusted using the body mass index (BMI) and body surface area (BSA), calculated for each patient from height and weight. SUVs were evaluated by their means and standard deviations under a variety of conditions, and scatterplots were created to illustrate the variation of SUV with weight. In addition, SUVs were determined for each age group. Liver SUVmax was significantly higher in patients with an obese BMI and a higher BSA (p < 0.001). Age appeared to be the most important predictor of SUVmax and was significantly associated with liver SUVmax (mean age 58.93 ± 13.57 years). Conclusions: Age is a factor that contributes to variations in liver SUVs. Our findings elucidate these age-related disparities in SUV and may help clinicians perform more accurate assessments of malignancies. However, the SUV overestimates the metabolic activity of every individual, and this overestimation is far more severe in obese people than in people with a normal BMI.
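The BMI, BSA, and SUV normalizations discussed in the abstract above can be sketched as below. The function names, the Du Bois BSA formula, and the unit conventions are illustrative assumptions, not the paper's actual methods.

```python
# Hedged sketch of the quantities in the abstract: BMI, BSA (Du Bois),
# body-weight SUV, and a BSA-normalised SUV. Unit conventions vary
# between sites; the ones used here are assumptions.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) via the Du Bois formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def suv_bw(activity_kbq_per_ml: float, dose_mbq: float, weight_kg: float) -> float:
    """Body-weight SUV: tissue activity / (injected dose / body weight in g)."""
    return activity_kbq_per_ml / (dose_mbq * 1000.0 / (weight_kg * 1000.0))

def suv_bsa(activity_kbq_per_ml: float, dose_mbq: float, bsa_m2: float) -> float:
    """BSA-normalised SUV: tissue activity / (injected dose / BSA in cm^2)."""
    return activity_kbq_per_ml / (dose_mbq * 1000.0 / (bsa_m2 * 10000.0))
```

Because BSA grows more slowly with weight than weight itself, the BSA-normalised SUV is less inflated in obese patients, which is the motivation for the adjustment described above.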
Evolutionary algorithms (EAs), as global search methods, have proved more robust than their local-heuristic counterparts for detecting protein complexes in protein-protein interaction (PPI) networks. Typically, the robustness of these EAs comes from their components and parameters: solution representation, selection, crossover, and mutation. Unfortunately, almost all EA-based complex detection methods suggested in the literature were designed with only canonical or traditional components. Further, the topological structure of the protein network is the main information used in the design of almost all such components. The main contribution of this paper is to formulate a more robust EA with …
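The "canonical" EA components the abstract refers to (representation, selection, crossover, mutation) can be sketched as below on a toy binary encoding. The fitness function and all parameter values are illustrative stand-ins, not the paper's actual topology-based operators.

```python
# Hedged sketch of a canonical EA loop: tournament selection,
# one-point crossover, bit-flip mutation. The fitness function is a toy
# stand-in for a complex-quality score on a PPI network.
import random

def fitness(chromosome):
    # Toy objective: count of 1-bits (placeholder for a real score).
    return sum(chromosome)

def tournament_select(population, k=2):
    # Pick the fitter of k randomly sampled individuals.
    return max(random.sample(population, k), key=fitness)

def one_point_crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def bit_flip_mutation(chromosome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chromosome]

def evolve(n_genes=20, pop_size=30, generations=50, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [bit_flip_mutation(
                          one_point_crossover(tournament_select(population),
                                              tournament_select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)
```

A non-canonical design of the kind the paper argues for would replace these generic operators with ones informed by the PPI network's topology.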
MPEG-DASH is an adaptive bitrate streaming technology that divides video content into small HTTP-object file segments at different bitrates. In live UHD video streaming, latency is the most important problem. In this paper, a low-delay streaming system is created using HTTP/2. Based on the network condition, the proposed system adaptively determines the bitrate of the segments. The video is coded using the layered H.265/HEVC compression standard, then tested to investigate the relationship between video quality and bitrate for various HEVC parameters and video motion at each layer/resolution. The system architecture covers the encoder/decoder configurations and how the adaptive video streaming is embedded. The encoder includes compression besides …
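The bitrate-adaptation rule described above can be sketched as below: pick the highest segment bitrate that fits the measured throughput with a safety margin. The bitrate ladder and the 0.8 margin are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of throughput-based segment selection for a layered
# HEVC bitrate ladder. Ladder values and margin are assumptions.

# Example layer ladder: (resolution, bitrate in kbit/s), best first.
LADDER = [("2160p", 16000), ("1080p", 6000), ("720p", 3000), ("480p", 1200)]

def choose_layer(throughput_kbps: float, margin: float = 0.8):
    """Return the best (resolution, bitrate) whose bitrate fits within
    margin * throughput; fall back to the lowest layer otherwise."""
    budget = throughput_kbps * margin
    for resolution, bitrate in LADDER:
        if bitrate <= budget:
            return resolution, bitrate
    return LADDER[-1]
```

In a live low-latency setting, the throughput estimate would be refreshed per segment (or per HTTP/2 frame) so the client can switch layers quickly.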
The issue of image captioning, which comprises automatic text generation to describe an image's visual information, has become feasible with the developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with Long Short-Term Memory (LSTM) networks to generate image captions. The process includes two stages: the first stage trains the CNN-LSTM models using baseline hyper-parameters, and the second stage trains the CNN-LSTM models while optimizing and adjusting the hyper-parameters of …
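The decoding half of a CNN-LSTM captioning pipeline can be sketched as below: the caption is built word by word from the image features. The `step` function here is a stub standing in for a trained CNN encoder plus LSTM decoder; the vocabulary and scores are entirely illustrative, not the paper's model.

```python
# Hedged sketch of greedy caption decoding. `step` is a stub for model
# inference; a real system would run the LSTM on (features, prefix).

VOCAB = ["<start>", "a", "dog", "runs", "<end>"]

def step(image_features, prefix):
    # Stub: deterministically score the next word by prefix length.
    order = {0: "a", 1: "dog", 2: "runs"}
    next_word = order.get(len(prefix) - 1, "<end>")
    return [1.0 if w == next_word else 0.0 for w in VOCAB]

def greedy_caption(image_features, max_len=10):
    caption = ["<start>"]
    for _ in range(max_len):
        scores = step(image_features, caption)
        word = VOCAB[max(range(len(VOCAB)), key=scores.__getitem__)]
        if word == "<end>":
            break
        caption.append(word)
    return " ".join(caption[1:])
```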
A hand gesture recognition system provides a robust and innovative solution for nonverbal communication through human–computer interaction. Deep learning models have excellent potential for use in recognition applications. To overcome related issues, most previous studies have proposed new model architectures or have fine-tuned pre-trained models. Furthermore, these studies relied on a single standard dataset for both training and testing, so their accuracy is only reasonable. Unlike these works, the current study investigates two deep learning models with intermediate layers to recognize static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m…
In this research, the effect of reinforcing epoxy resin composites with a filler derived from chopped oil palm (OP) agricultural waste was studied. Epoxy/OP composites were formed by dispersing OP filler (1, 3, 5, and 10 wt%) with a high-speed mechanical stirrer, using a hand lay-up method. The effect of adding zinc oxide (ZnO) nanoparticles, with an average size of 10–30 nm, at different loadings (1, 2, 3, and 5 wt%) on the behavior of the epoxy/oil palm composite was studied. Fourier-transform infrared (FTIR) spectrometry and mechanical tests (tensile, impact, hardness, and wear rate) were used to examine the composites. The FTIR …
As a result of the significance of image compression in reducing the volume of data, this compression remains permanently necessary, so that data can be transferred more quickly over communication channels and kept in less memory space. In this study, an efficient compression system is suggested; it depends on transform coding (the Discrete Cosine Transform or the bi-orthogonal (tap-9/7) wavelet transform) together with the LZW compression technique. The suggested scheme was applied to color and gray models, and the transform coding was applied to decompose each color and gray sub-band individually. The quantization process is performed, followed by LZW coding to compress the images. The suggested system was applied to a set of seven stand…
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were conducted under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Also, observation membership was inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others.
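The finite-mixture idea above, including the "observation membership" step, can be sketched as below with a two-component 1-D Gaussian mixture fitted by EM. This is a generic illustration of component estimation and membership probabilities, not the paper's mixture-regression estimators.

```python
# Hedged sketch: EM for a two-component Gaussian mixture. The E-step
# computes per-observation membership probabilities; the M-step updates
# the component parameters from those memberships.
import math, random

def em_gmm2(xs, iters=50):
    mu = [min(xs), max(xs)]      # crude initialisation
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities (membership probabilities).
        resp = []
        for x in xs:
            dens = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted updates of weights, means, std deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
    return mu, sigma, pi, resp
```

The final `resp` matrix is what "observation membership" refers to: each row gives the posterior probability that an observation belongs to each component.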
The elastic magnetic M1 electron scattering form factor has been calculated for the ground state (J, T = 1/2⁻, 1/2) of ¹³C. The single-particle model is used with a harmonic-oscillator wave function. Core-polarization effects are calculated in first-order perturbation theory, including excitations up to 5ħω, using the modified surface delta interaction (MSDI) as a residual interaction. No parameters are introduced in this work. The data are reasonably explained up to q ≈ 2.5 fm⁻¹.