Abstract: The utility of DNA sequencing in the diagnosis and prognosis of diseases is vital for assessing the risk of genetic disorders, particularly for asymptomatic individuals with a genetic predisposition. Such diagnostic approaches are integral in guiding health and lifestyle decisions and in preparing families with the foreknowledge needed to anticipate potential genetic abnormalities. The present study explores implementing a define-by-run deep learning (DL) model optimized with the Tree-structured Parzen Estimator algorithm to enhance the precision of genetic diagnostic tools. Unlike conventional models, the define-by-run model bolsters accuracy through dynamic adaptation to the data during the learning process and iterative optimization of critical hyperparameters, such as layer count, neuron count per layer, learning rate, and batch size. The model was trained and evaluated on a diverse dataset comprising DNA sequences from two distinct groups: patients diagnosed with breast cancer and a control group of healthy individuals. It showcased remarkable performance, with accuracy, precision, recall, F1-score, and area under the curve reaching 0.871, 0.872, 0.871, 0.872, and 0.95, respectively, outperforming previous models. These findings underscore the significant potential of DL techniques in improving the accuracy of disease diagnosis and prognosis through DNA sequencing, indicating substantial advancements in personalized medicine and genetic counseling. Collectively, the findings of this investigation suggest that DL holds transformative potential in the landscape of genetic disorder diagnosis and management.
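As a rough illustration of the define-by-run idea described above, the sketch below uses Optuna's TPE sampler to let each trial decide its own layer count, per-layer neuron count, learning rate, and batch size. The dataset (scikit-learn's tabular breast-cancer toy data), the MLP backbone, and all search ranges are stand-ins chosen for the example, not the study's actual pipeline.

```python
# Define-by-run hyperparameter search with a TPE sampler (sketch).
# Assumptions: scikit-learn's tabular breast-cancer dataset stands in for the
# DNA-sequence features used in the study; search ranges are illustrative.
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Define-by-run: the architecture is built while the trial runs,
    # so the number of tunable parameters depends on earlier suggestions.
    n_layers = trial.suggest_int("n_layers", 1, 4)
    hidden = tuple(
        trial.suggest_int(f"units_l{i}", 16, 256, log=True) for i in range(n_layers)
    )
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64, 128])

    model = MLPClassifier(
        hidden_layer_sizes=hidden,
        learning_rate_init=lr,
        batch_size=batch_size,
        max_iter=300,
        random_state=0,
    )
    # 5-fold cross-validated accuracy is the objective to maximize.
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```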
Power plants built to generate electricity have become common worldwide; one such plant is the steam power plant. In such plants, the various moving parts of heavy machines generate a great deal of noise, and operators are exposed to high noise levels. Exposure to high noise levels leads to psychological as well as physiological problems and different kinds of ill effects, and it results in deteriorated work efficiency, although the exact nature of the effect on work performance is still unknown. To predict the deterioration of work efficiency, neuro-fuzzy tools are being used in research. It has been established that a neuro-fuzzy computing system helps in the identification and analysis of fuzzy models, and the last decade has seen substantial growth in the development of various neuro-fuzzy systems.
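To make the neuro-fuzzy idea concrete, the sketch below evaluates a tiny zero-order Sugeno-style rule base that maps noise level and exposure time to a predicted drop in work efficiency. The membership functions, rules, and output levels are all hypothetical placeholders, not parameters from the study; a real neuro-fuzzy system would learn them from data.

```python
# Minimal zero-order Sugeno-style fuzzy inference (sketch).
# All membership parameters and rule consequents below are hypothetical.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def efficiency_drop(noise_db, hours):
    # Fuzzify the two inputs (placeholder membership functions).
    noise_low, noise_high = trimf(noise_db, 60, 70, 85), trimf(noise_db, 75, 95, 110)
    time_short, time_long = trimf(hours, 0, 2, 5), trimf(hours, 3, 8, 12)

    # Rule base: (firing strength, constant consequent = % efficiency drop).
    rules = [
        (min(noise_low,  time_short),  2.0),   # low noise, short exposure
        (min(noise_low,  time_long),   8.0),   # low noise, long exposure
        (min(noise_high, time_short), 12.0),   # high noise, short exposure
        (min(noise_high, time_long),  25.0),   # high noise, long exposure
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    # Weighted-average defuzzification.
    return float((w * z).sum() / (w.sum() + 1e-12))

print(efficiency_drop(noise_db=90, hours=8))  # predicted % drop for one operator
```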
A mathematical method and a new algorithm, implemented with the aid of the Matlab language, are proposed to compute the linear equivalence (or recursion length) of pseudo-random key-stream periodic sequences using the Fourier transform. The proposed method enables the computation of the linear equivalence in order to determine the degree of complexity of any binary or real periodic sequence produced by a linear or nonlinear key-stream generator. The procedure can be applied with comparatively greater computational ease and efficiency. The results of this algorithm are compared with the Berlekamp-Massey (BM) method, and good results are obtained, with the Fourier-transform results being more accurate than those of the BM method for computing the linear equivalence.
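The connection between the Fourier transform and linear complexity is usually attributed to Blahut's theorem: for a sequence of period N over a field containing an N-th root of unity, the linear complexity equals the number of nonzero DFT coefficients. The sketch below applies that idea to a real-valued periodic sequence with NumPy; it is a generic illustration of the principle, not the paper's Matlab algorithm.

```python
# Linear complexity of a real periodic sequence via the DFT (Blahut's theorem).
# Generic illustration only, not the paper's algorithm.
import numpy as np

def linear_complexity_dft(one_period, tol=1e-9):
    """Count the nonzero DFT coefficients of one full period of the sequence."""
    spectrum = np.fft.fft(np.asarray(one_period, dtype=float))
    return int(np.sum(np.abs(spectrum) > tol))

# Example: a sampled cosine of period 8 satisfies the 2nd-order recursion
# s(t) = 2*cos(2*pi/8)*s(t-1) - s(t-2), so its linear complexity is 2.
t = np.arange(8)
seq = np.cos(2 * np.pi * t / 8)
print(linear_complexity_dft(seq))  # -> 2 (nonzero DFT bins at k = 1 and 7)
```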
This paper investigated the treatment of textile wastewater polluted with aniline blue (AB) by an electrocoagulation process using stainless steel mesh electrodes in a horizontal arrangement. The experimental design applied response surface methodology (RSM) to find the mathematical model, by adjusting the current density (4-20 mA/cm2), the distance between electrodes (0.5-3 cm), the salt concentration (50-600 mg/l), the initial dye concentration (50-250 mg/l), the pH value (2-12), and the experimental time (5-20 min). The results showed that time is the most important parameter affecting the performance of the electrocoagulation system. Maximum removal efficiency (96%) was obtained at a current density of 20 mA/cm2, a distance be…
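Response surface methodology typically fits a second-order polynomial in the process factors and then searches the fitted surface for the optimum. The sketch below fits such a quadratic model with scikit-learn; the design points and removal efficiencies are synthetic placeholders standing in for the paper's actual experimental runs.

```python
# Second-order (quadratic) response-surface fit (sketch).
# The design points and responses below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Factors: current density (mA/cm2), electrode gap (cm), salt (mg/l),
#          dye (mg/l), pH, time (min) -- ranges taken from the abstract.
lows  = np.array([4.0, 0.5,  50.0,  50.0,  2.0,  5.0])
highs = np.array([20.0, 3.0, 600.0, 250.0, 12.0, 20.0])
X = rng.uniform(lows, highs, size=(40, 6))   # 40 hypothetical runs
y = rng.uniform(40.0, 96.0, size=40)         # hypothetical removal efficiency, %

# Full quadratic model: linear, interaction, and squared terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Predict removal efficiency at one candidate operating point.
point = np.array([[20.0, 1.0, 300.0, 100.0, 7.0, 15.0]])
print(model.predict(poly.transform(point)))
```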
This study aimed at identifying the trend toward applying the Joint Audit as an approach to improving the quality of financial reports in all of their characteristics (relevance, reliability, comparability, consistency), as well as at disclosing the difficulties that auditors in the Gaza Strip faced in implementing the Joint Audit. In order to achieve the study aims, a measure was used to identify the trend toward applying the Joint Audit; it was distributed to the study sample of (119) individuals, of whom (99) returned responses valid for analysis, approximately (83.2%): (69) of them are auditors and (30) are financial managers and accountants. The researcher used the analytical descriptive method, and after analyzing the results, the s…
Massive multiple-input multiple-output (massive MIMO) is a promising technology for next-generation wireless communication systems due to its capability to increase the data rate and meet the enormous ongoing explosion of data traffic. However, in non-reciprocal channels, such as those encountered in frequency division duplex (FDD) systems, channel state information (CSI) estimation using a downlink (DL) training sequence remains to date a very challenging issue, especially when the channel exhibits a shorter coherence time. In particular, the availability of sufficiently accurate CSI at the base transceiver station (BTS) allows an efficient precoding design for the DL transmission to be achieved, and thus reliable communication systems can be obtained.
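As a point of reference for pilot-based CSI acquisition, the sketch below simulates a least-squares estimate of an FDD downlink channel from a known training (pilot) block, H_hat = Y X^H (X X^H)^{-1}. The antenna counts, pilot length, and noise level are arbitrary illustration choices, and least squares is used only as a generic baseline rather than the scheme proposed in the paper.

```python
# Least-squares downlink channel estimation from a pilot block (sketch).
# Dimensions, pilot design, and SNR are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, n_pilot, snr_db = 64, 4, 96, 10   # BTS antennas, user antennas, pilot length, SNR

# Rayleigh-fading channel H (n_rx x n_tx) and unit-power random pilots X (n_tx x n_pilot).
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
X = (rng.standard_normal((n_tx, n_pilot)) + 1j * rng.standard_normal((n_tx, n_pilot))) / np.sqrt(2)

# Received pilot block Y = H X + N at the chosen SNR.
noise_std = np.sqrt(n_tx / (2 * 10 ** (snr_db / 10)))
N = noise_std * (rng.standard_normal((n_rx, n_pilot)) + 1j * rng.standard_normal((n_rx, n_pilot)))
Y = H @ X + N

# LS estimate: H_hat = Y X^H (X X^H)^{-1}  (requires n_pilot >= n_tx).
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

nmse = np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2
print(f"NMSE of the LS channel estimate: {nmse:.3f}")
```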
The key objective of the study is to understand the best processes that are currently used in managing talent in Australian higher education (AHE) and to design a quantitative measurement of talent management processes (TMPs) for the higher education (HE) sector.
The three qualitative multi-method studies that are commonly used in empirical research, namely brainstorming, focus group discussions, and semi-structured individual interviews, were considered. Twenty…
We use Bayes estimators for the unknown scale parameter of the Erlang distribution when the shape parameter is known, assuming different informative priors for the unknown scale parameter. We derive the posterior density, together with the posterior mean and posterior variance, using different informative priors for the unknown scale parameter, namely the inverse exponential distribution, the inverse chi-square distribution, the inverse gamma distribution, and the standard Levy distribution. We also derive Bayes estimators based on the general entropy loss function (GELF), and a simulation method is used to obtain the results. We generated different cases of the parameters of the Erlang model for different sample sizes. The estimates have been comp…
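To illustrate the kind of simulation described above, the sketch below draws an Erlang sample with a known shape, forms the conjugate inverse-gamma posterior for the scale parameter, and evaluates the Bayes estimator under the general entropy loss function, which equals (E[theta^(-c) | data])^(-1/c). The inverse-gamma prior, its hyperparameters, and the loss constant c are illustrative choices; the other priors studied in the paper are not shown here.

```python
# Bayes estimation of the Erlang scale parameter under the general entropy
# loss function (GELF), using a conjugate inverse-gamma prior (sketch).
# Prior hyperparameters, c, and the sample size are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)

k, theta_true, n = 3, 2.0, 50                      # known shape, true scale, sample size
x = rng.gamma(shape=k, scale=theta_true, size=n)   # Erlang(k, theta) sample

# Inverse-gamma prior IG(a, b) on theta is conjugate for the Erlang likelihood:
# the posterior is IG(a + n*k, b + sum(x)).
a, b = 2.0, 2.0
alpha_post, beta_post = a + n * k, b + x.sum()

# Monte Carlo draws from the posterior (theta = 1 / Gamma(alpha, rate=beta)).
theta_draws = 1.0 / rng.gamma(shape=alpha_post, scale=1.0 / beta_post, size=100_000)

# GELF Bayes estimator: the minimizer of E[(d/theta)^c - c*log(d/theta) - 1]
# is d = (E[theta^(-c)])^(-1/c).
c = 1.5
theta_gelf = np.mean(theta_draws ** (-c)) ** (-1.0 / c)

print(f"posterior mean : {theta_draws.mean():.3f}")
print(f"GELF estimate  : {theta_gelf:.3f}   (true theta = {theta_true})")
```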
Several stress-strain models have been used to predict the strengths of steel-fiber-reinforced concrete, which are distinctive of the material. However, insufficient research has been done on the influence of hybrid fiber combinations (comprising two or more distinct fibers) on the characteristics of concrete. For this reason, the researchers conducted an experimental program to determine the stress-strain relationship of 30 concrete samples reinforced with two distinct fibers (a hybrid of polyvinyl alcohol and steel fibers), with compressive strengths ranging from 40 to 120 MPa. A total of 80% of the experimental results were used to develop a new empirical stress-strain model, which was accomplished through the application of the parti…
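Empirical stress-strain models for fiber-reinforced concrete are commonly written in forms such as the Carreira-Chu relation, sigma = fc * beta*(eps/eps0) / (beta - 1 + (eps/eps0)^beta), with the parameters calibrated against measured curves. The sketch below fits that form to synthetic stress-strain points with nonlinear least squares; the data, the chosen functional form, and the fitting routine are illustrative stand-ins, since the optimization technique named in the abstract is cut off above.

```python
# Calibrating a Carreira-Chu type stress-strain curve by nonlinear least
# squares (sketch). Data points and the functional form are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def carreira_chu(eps, fc, eps0, beta):
    """sigma = fc * beta*(eps/eps0) / (beta - 1 + (eps/eps0)**beta)."""
    r = eps / eps0
    return fc * beta * r / (beta - 1.0 + r ** beta)

# Synthetic "measured" curve for a hypothetical 60 MPa hybrid-fiber mix.
rng = np.random.default_rng(0)
eps = np.linspace(0.0002, 0.006, 30)
sigma = carreira_chu(eps, fc=60.0, eps0=0.0025, beta=2.2)
sigma += rng.normal(0.0, 1.0, eps.size)          # measurement noise

# Fit the three parameters (peak stress, peak strain, shape factor).
params, _ = curve_fit(carreira_chu, eps, sigma, p0=(50.0, 0.002, 2.0))
print(dict(zip(["fc", "eps0", "beta"], params.round(4))))
```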
A series of Schiff-base compounds bearing a salicylaldehyde moiety (1-4) was designed, synthesized, subjected to in silico ADMET prediction and molecular docking, characterized by FT-IR and CHNS analysis techniques, and finally evaluated for their anti-inflammatory profile using a cyclooxygenase fluorescence inhibitor screening assay, along with the standard drugs celecoxib and diclofenac. The ADMET studies were used to predict which compounds would be suitable for oral administration, as well as absorption sites, bioavailability, TPSA, and drug-likeness. According to the ADME data, all of the produced compounds can be absorbed through the GIT and pass Lipinski's rule of five. Through molecular docking with PyRx 0.8, these…
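Lipinski's rule of five, referenced above, flags a molecule as likely orally absorbable when molecular weight <= 500, LogP <= 5, hydrogen-bond donors <= 5, and hydrogen-bond acceptors <= 10. The sketch below applies that check with RDKit, using the bare salicylaldehyde fragment as a stand-in, since the structures of the actual compounds 1-4 are not given here.

```python
# Lipinski rule-of-five check with RDKit (sketch).
# The SMILES below is the bare salicylaldehyde fragment, used only as a
# stand-in for the actual Schiff-base compounds 1-4.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    props = {
        "MolWt": Descriptors.MolWt(mol),
        "LogP": Descriptors.MolLogP(mol),
        "HBD": Lipinski.NumHDonors(mol),
        "HBA": Lipinski.NumHAcceptors(mol),
    }
    violations = sum([
        props["MolWt"] > 500,
        props["LogP"] > 5,
        props["HBD"] > 5,
        props["HBA"] > 10,
    ])
    return props, violations

props, violations = rule_of_five("O=Cc1ccccc1O")   # salicylaldehyde
print(props, "violations:", violations)
```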