Coumarins have been recognized as anticancer candidates, and HDAC inhibitors (HDACis) are one of the most interesting topics in antitumor research. In order to achieve increased anticancer efficacy, a series of hybrid compounds bearing coumarin scaffolds have been designed and synthesized as novel HDACis. In this review we present a series of novel HDAC inhibitors comprising coumarin as the core of the cap group, which have been designed, synthesized and assessed for their enzyme inhibitory activity as well as their antiproliferative activity. Most of them exhibited potent HDAC inhibitory activity and significant cytotoxicity.
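The assay format is not given in the abstract; as a hedged illustration of how "potent HDAC inhibitory activity" is typically quantified, the Python sketch below fits a four-parameter logistic (Hill) curve to hypothetical dose-response data with SciPy and reports the estimated IC50. All compound data and numbers are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic model: % residual enzyme activity vs. inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data for an illustrative coumarin-based HDAC inhibitor.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # µM
activity = np.array([98, 95, 82, 55, 28, 12, 5], dtype=float)   # % residual HDAC activity

params, _ = curve_fit(four_param_logistic, conc, activity,
                      p0=[0.0, 100.0, 0.3, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.2f} µM (Hill slope {hill:.2f})")
```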
A new, simple, sensitive and accurate spectrophotometric method has been developed for the determination of the drug sulfanilamide (SNA) in pure form and in a synthetic sample. The method is based on the reaction of sulfanilamide (SNA) with 1,2-naphthoquinone-4-sulphonic acid (NQS) to form an N-alkylamino naphthoquinone by replacement of the sulphonate group of the naphthoquinone sulphonic acid with an amino group. The colored chromogen shows an absorption maximum at 455 nm. The optimum conditions of the condensation reaction were investigated by: (1) a univariate method, optimizing the effect of the experimental variables (different bases, reagent concentration, borax concentration and reaction time), and (2) a central composite design (CCD) including
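The abstract stops before the calibration details; the short Python sketch below is only a generic illustration of how a spectrophotometric determination like this is usually evaluated, fitting a linear Beer-Lambert calibration of absorbance at 455 nm against sulfanilamide concentration and back-calculating an unknown sample. Every numerical value in it is hypothetical.

```python
import numpy as np

# Hypothetical calibration data: absorbance at 455 nm of the SNA-NQS chromogen
# versus sulfanilamide concentration (µg/mL). Values are illustrative only.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # µg/mL
absorbance = np.array([0.112, 0.221, 0.335, 0.448, 0.559])

# Beer-Lambert behaviour is assumed linear over this range; fit A = m*C + b.
slope, intercept = np.polyfit(conc, absorbance, 1)

# Back-calculate the concentration of an unknown synthetic sample from its absorbance.
a_unknown = 0.290
c_unknown = (a_unknown - intercept) / slope
print(f"Calibration: A = {slope:.4f}·C + {intercept:.4f}")
print(f"Unknown sample ≈ {c_unknown:.2f} µg/mL sulfanilamide")
```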
The laser is a powerful device with a wide range of applications in fields ranging from materials science and manufacturing to medicine and fibre optic communications. One remarkable
By definition, the detection of the protein complexes that form within protein-protein interaction networks (PPINs) is an NP-hard problem. Evolutionary algorithms (EAs), as global search methods, have been shown in the literature to be more successful than greedy methods at detecting protein complexes. However, the design of most of these EA-based approaches relies on the topological information of the proteins in the PPIN. Biological information, as a key resource for molecular profiles, has on the other hand received little attention in the design of the components of these EA-based methods. The main aim of this paper is to redesign two operators in the EA based on the functional domain rather than the graph topological domain. The perturb
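The paper's two redesigned operators are not described in the visible text, so the following Python sketch is only a generic illustration of the idea: a mutation (perturbation) operator that grows a candidate complex using functional annotations (e.g. shared GO terms) rather than graph topology. The protein identifiers and annotation sets are invented.

```python
import random

# Toy functional annotations (e.g., GO terms) per protein; hypothetical data.
go_terms = {
    "P1": {"GO:0006281", "GO:0005634"},
    "P2": {"GO:0006281", "GO:0003677"},
    "P3": {"GO:0016020"},
    "P4": {"GO:0006281", "GO:0005634", "GO:0003677"},
}

def functional_similarity(protein, complex_members):
    """Jaccard similarity between a protein's annotations and the union of the complex's annotations."""
    complex_terms = set().union(*(go_terms[m] for m in complex_members))
    overlap = go_terms[protein] & complex_terms
    union = go_terms[protein] | complex_terms
    return len(overlap) / len(union) if union else 0.0

def functional_mutation(candidate, all_proteins):
    """Mutate a candidate complex by adding one outside protein, biased toward
    functional coherence instead of topological connectivity."""
    outside = [p for p in all_proteins if p not in candidate]
    if not outside:
        return candidate
    weights = [functional_similarity(p, candidate) + 1e-6 for p in outside]
    chosen = random.choices(outside, weights=weights, k=1)[0]
    return candidate | {chosen}

print(functional_mutation({"P1", "P2"}, list(go_terms)))
```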
Computer models are used in the study of electrocardiography to provide insight into physiological phenomena that are difficult to measure in the lab or in a clinical environment.
The electrocardiogram is an important tool for the clinician in that it changes characteristically in a number of pathological conditions. Many illnesses can be detected by this measurement. By simulating the electrical activity of the heart one obtains a quantitative relationship between the electrocardiogram and different anomalies.
Because of the inhomogeneous fibrous structure of the heart and the irregular geometry of the body, the finite element method is used to study the electrical properties of the heart.
This work describes t
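The visible text ends before the model details, so the sketch below illustrates only the simplest quantitative link between cardiac electrical activity and surface potentials: a single current dipole in an infinite homogeneous volume conductor. It is a stand-in for intuition, not the finite element torso model the abstract refers to; the conductivity, dipole moment and electrode positions are illustrative assumptions.

```python
import numpy as np

def dipole_potential(obs_point, dipole_pos, dipole_moment, sigma=0.2):
    """Extracellular potential of a single current dipole in an infinite homogeneous
    volume conductor (conductivity sigma, S/m): phi = p·r / (4*pi*sigma*|r|^3).
    A drastic simplification of the FEM torso models used in electrocardiography."""
    r = np.asarray(obs_point, float) - np.asarray(dipole_pos, float)
    dist = np.linalg.norm(r)
    return np.dot(dipole_moment, r) / (4.0 * np.pi * sigma * dist ** 3)

# Hypothetical cardiac dipole and two "electrode" positions on the chest (metres).
heart_dipole_pos = np.array([0.0, 0.0, 0.0])
heart_dipole_moment = np.array([0.0, 0.0, 1e-4])   # A·m, illustrative magnitude
for electrode in [np.array([0.05, 0.0, 0.10]), np.array([-0.05, 0.0, 0.10])]:
    phi = dipole_potential(electrode, heart_dipole_pos, heart_dipole_moment)
    print(f"Electrode at {electrode}: potential ≈ {phi * 1e3:.3f} mV")
```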
The internet of medical things (IoMT) is expected to become one of the biggest technologies in worldwide distribution. With 5th generation (5G) transmission, the market possibilities related to the IoMT are improved and the associated hazards are detected. This framework describes a strategy for proactively addressing concerns and offering a forum to promote development, alter attitudes and maintain people's confidence in the broader healthcare system without compromising security. It is combined with a data offloading system to speed up the transmission of medical data and improve the quality of service (QoS). As a result of this development, we suggested the enriched energy efficient fuzzy (EEEF) data offloading technique to enhance the delivery of data.
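The EEEF technique itself is not specified in the visible text; the toy Python sketch below only illustrates the general flavour of a fuzzy offloading decision, combining normalised residual energy and link quality through triangular membership functions and a max (OR) rule. The membership breakpoints and the 0.5 threshold are arbitrary assumptions, not the paper's method.

```python
def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_decision(residual_energy, link_quality):
    """Toy fuzzy-style rule: offload medical data to the edge/5G network when the
    device energy is 'low' OR the link quality is 'high'. Inputs are normalised to
    [0, 1]. A generic illustration only, not the EEEF technique from the abstract."""
    energy_low = triangular(residual_energy, -0.1, 0.0, 0.5)
    link_high = triangular(link_quality, 0.5, 1.0, 1.1)
    offload_score = max(energy_low, link_high)      # Mamdani-style OR (max)
    return offload_score > 0.5, offload_score

print(offload_decision(residual_energy=0.2, link_quality=0.8))
```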
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because M.R.I. data are so diverse. An M.R.I. data sequence comprises numerous images, and the features we are searching for may differ from one image in the series to the next. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (D.L.) algorithm automatically extracts the features of what is already there. The surface changes become valuable when
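To make the contrast with hand-crafted features concrete, the minimal PyTorch sketch below defines a small convolutional network that learns its own features from single-channel M.R.I. slices. It is not the architecture used in the paper; the layer sizes, input resolution and two-class output are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class MeniscusCNN(nn.Module):
    """Minimal convolutional network sketch: features are learned from the M.R.I.
    slices instead of being hand-engineered. Not the architecture from the paper."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One hypothetical batch of 4 single-channel 128x128 M.R.I. slices.
model = MeniscusCNN()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)   # torch.Size([4, 2])
```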