Skull image separation is one of the initial procedures used to detect brain abnormalities. In an MRI image of the brain, this process involves distinguishing brain tissue from non-brain tissue. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping in brain magnetic resonance volumes has therefore become increasingly popular, driven by the need for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately in neuroimaging diagnostic systems, since residual non-brain tissue, or the removal of brain regions, cannot be corrected in subsequent steps and introduces an unfixable error into further analysis. Accurate skull stripping is therefore essential for such systems. This paper proposes a system based on deep learning and image processing: an innovative method for adapting one pre-trained model into another using pre-processing operations, with the CLAHE filter as a critical phase. The global IBSR dataset was used for training and testing. To assess the system's efficacy, experiments were performed on three-dimensional volumes across the three anatomical sections of MR images as well as on two-dimensional images, and the results were 99.9% accurate.
The development of Web 2.0 has improved people's ability to share their opinions. These opinions serve as an important source of knowledge for other reviewers. To figure out what the opinions are about, an automatic analysis system is needed. Aspect-based sentiment analysis is the most important research topic conducted to extract reviewers' opinions about certain attributes, i.e., opinion targets (aspects). In aspect-based tasks, the identification of implicit aspects, i.e., aspects implicitly implied in a review rather than stated, is the most challenging task to accomplish. This paper strives to identify implicit aspects using a hierarchical algorithm incorporating common-sense knowledge by means of dimensionality reduction.
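The core difficulty of implicit aspects can be illustrated with a minimal sketch: opinion words that imply an aspect without naming it are mapped through a small hand-built common-sense lexicon. The lexicon entries below are illustrative placeholders, not the knowledge base the paper uses.

```python
# Tiny common-sense lexicon: an opinion clue word implies an aspect.
CLUE_TO_ASPECT = {
    "expensive": "price", "cheap": "price", "overpriced": "price",
    "tasty": "food", "delicious": "food", "bland": "food",
    "slow": "service", "rude": "service", "friendly": "service",
}

def implicit_aspects(review: str) -> set:
    """Aspects implied (but never named) by the review's opinion words."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    return {CLUE_TO_ASPECT[t] for t in tokens if t in CLUE_TO_ASPECT}

print(implicit_aspects("The dishes were delicious but a bit expensive."))
```

The review above never contains the words "food" or "price", which is exactly why implicit aspect identification needs external knowledge rather than surface matching.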
Image quality plays a vital role in improving and assessing image compression performance. Image compression represents big image data as a new image with a smaller size suitable for storage and transmission. This paper aims to evaluate the implementation of hybrid techniques based on the tensor product mixed transform. Compression and quality metrics such as compression ratio (CR), rate distortion (RD), peak signal-to-noise ratio (PSNR), and structural content (SC) are utilized for evaluating the hybrid techniques. A comparison between the techniques is then carried out according to these metrics to determine the best one. The main contribution is to improve the hybrid techniques. The proposed hybrid techniques consist of discrete wavel
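The evaluation metrics named above have standard closed forms; a minimal numpy sketch (8-bit images assumed, so MAX = 255) follows. The demo images are illustrative, not the paper's test data.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR: original size over compressed size."""
    return original_bytes / compressed_bytes

def structural_content(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """SC: ratio of total signal energy of the two images."""
    return np.sum(original.astype(float) ** 2) / np.sum(reconstructed.astype(float) ** 2)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy(); b[0, 0] = 110          # introduce a small distortion
print(round(psnr(a, b), 2), compression_ratio(4096, 1024))
```

Higher PSNR and SC close to 1 indicate better fidelity, while a higher CR indicates stronger compression; the trade-off between them is what the rate-distortion comparison captures.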
The spatial assessment criteria system for hybridizing renewable energy sources, such as hybrid solar-wind farms, is critical in selecting ideal installation sites that maximize benefits, reduce costs, protect the environment, and serve the community. However, a systematic approach to designing indicator systems is rarely used in relevant site selection studies. Therefore, the current paper attempts to present an inclusive framework based on content validity to create an effective criteria system for siting wind-solar plants. To this end, the criteria considered in the related literature are captured, and the top 10 frequent indicators are identified. The Delphi technique is used to subject commonly used factors to expert judgme
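The first step of the framework, identifying the most frequently used siting criteria across the literature, amounts to a simple frequency count. The criteria lists below are illustrative placeholders, not the studies actually surveyed.

```python
from collections import Counter

# Each inner list stands for the criteria reported by one reviewed study.
studies = [
    ["solar irradiance", "wind speed", "slope", "land cost"],
    ["wind speed", "solar irradiance", "distance to grid"],
    ["slope", "solar irradiance", "distance to grid", "land cost"],
]

# Count how often each criterion appears across all studies.
freq = Counter(c for study in studies for c in study)
top = [name for name, _ in freq.most_common(3)]
print(top)
```

The resulting top-k list is then what the Delphi rounds subject to expert judgment.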
Face identification systems have been an active research area in recent years. However, their accuracy and reliability in real-life systems are still questionable. Earlier research in face identification demonstrated that LBP-based face recognition systems are preferred over others and give adequate accuracy. LBP is robust against illumination changes and is considered a high-speed algorithm. Performance metrics for such systems are calculated from time delay and accuracy. This paper introduces an improved face recognition system built using the C++ programming language with the help of the OpenCV library. Accuracy can be increased if a filter or a combination of filters is applied to the images. The accuracy increases from 95.5% (without ap
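The basic LBP descriptor underlying such systems can be sketched in a few lines: each interior pixel is re-coded by comparing it with its 8 neighbours, and the image is summarized by a histogram of the resulting codes. This is a generic numpy illustration of LBP, not the paper's C++/OpenCV implementation.

```python
import numpy as np

def lbp_codes(img: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes for the interior pixels of a 2-D image."""
    c = img[1:-1, 1:-1].astype(int)
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx].astype(int)
        code |= (nb >= c).astype(int) << bit   # set the bit if neighbour >= centre
    return code

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """256-bin histogram of LBP codes, the face descriptor."""
    return np.bincount(lbp_codes(img).ravel(), minlength=256)

demo = np.arange(25).reshape(5, 5)      # toy 5x5 "face patch"
hist = lbp_histogram(demo)
print(hist.sum())                        # one code per interior pixel
```

Because the codes depend only on intensity ordering, uniform illumination changes leave the histogram unchanged, which is the robustness property the abstract refers to.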
Physical matter at a high energy level under specific circumstances tends to behave in a harsh and complicated manner while sustaining the thermodynamic equilibrium or non-equilibrium of the system. Measuring temperature by ordinary techniques in these cases is not applicable at all. Likewise, there is a need to apply mathematical models in numerous critical applications to measure the temperature accurately at the atomic level of matter. Those mathematical models follow statistical rules with different distribution approaches for the energy quantities of the system. However, these approaches have functional effects at the microscopic and macroscopic levels of that system. Therefore, this research study presents an innovative of a wi
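One such statistical approach can be made concrete: for a system in thermal equilibrium, the relative population of two energy levels follows the Boltzmann factor, n2/n1 = exp(-(E2 - E1)/(kB T)), so a measured population ratio yields the temperature without any thermometer. The level spacing and ratio below are illustrative values, not data from the study.

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)

def temperature_from_populations(delta_e: float, ratio: float) -> float:
    """Temperature implied by a population ratio n2/n1 across a gap delta_e (J)."""
    # Invert n2/n1 = exp(-delta_e / (KB * T)) for T.
    return delta_e / (-KB * math.log(ratio))

delta_e = 2.0e-20          # assumed level spacing, J
ratio = 0.05               # assumed measured population ratio n2/n1
T = temperature_from_populations(delta_e, ratio)
print(round(T, 1), "K")
```

Different distribution approaches (Maxwell-Boltzmann, Bose-Einstein, Fermi-Dirac) change the population formula, which is why the choice of statistics has the microscopic and macroscopic effects the abstract mentions.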
In this research, we discuss how to improve the work by dealing with the factors that participate in enabling a small IT organization to produce software using a suitable development process, supported by experimental theories, to achieve its goals, starting from the selection of the methodology used to implement the software. The steps used should be compatible with the type of products the organization will produce, which here is Web-Based Project Development. The researcher suggests Extreme Programming (XP) as a methodology for Web-Based Project Development and justifies this suggestion, showing how important and effective the methodology is in the software dev
Orthogonal Frequency Division Multiplexing (OFDM) is an efficient multi-carrier technique. The core operation in OFDM systems is the FFT/IFFT unit, which requires a large amount of hardware resources and processing delay. Developments in implementation techniques like Field Programmable Gate Array (FPGA) technologies have made OFDM a feasible option. The goal of this paper is to design and implement an OFDM transmitter based on an Altera FPGA using Quartus software. The proposed transmitter simplifies the Fourier transform calculation by using a decoder instead of multipliers. After programming the ALTERA DE2 FPGA kit with the implemented project, several practical tests were carried out, starting from monitoring all the results of
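The transmitter chain described above can be sketched in numpy as a reference model: map bits to QPSK symbols, take an IFFT across the subcarriers, and prepend a cyclic prefix. The sizes and the QPSK mapping are illustrative assumptions, not the FPGA design's parameters.

```python
import numpy as np

N_SUB = 64        # number of subcarriers (IFFT size)
CP_LEN = 16       # cyclic-prefix length in samples

def ofdm_symbol(bits: np.ndarray) -> np.ndarray:
    """Build one time-domain OFDM symbol from 2*N_SUB bits (QPSK mapping)."""
    assert bits.size == 2 * N_SUB
    b = bits.reshape(-1, 2)
    # Map each bit pair to a QPSK constellation point (+-1 +-1j)/sqrt(2).
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    time = np.fft.ifft(symbols, n=N_SUB)              # subcarriers -> samples
    return np.concatenate([time[-CP_LEN:], time])     # prepend cyclic prefix

rng = np.random.default_rng(1)
tx = ofdm_symbol(rng.integers(0, 2, 2 * N_SUB))
print(tx.size)    # CP_LEN + N_SUB samples
```

A software model like this is also how an FPGA implementation is typically verified: the hardware's decoder-based transform outputs are compared sample-by-sample against the floating-point IFFT reference.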
The present study investigates deep eutectic solvents (DESs) as potential media for enzymatic hydrolysis. A series of ternary ammonium and phosphonium-based DESs were prepared at different molar ratios by mixing with aqueous glycerol (85%). The physicochemical properties including surface tension, conductivity, density, and viscosity were measured at a temperature range of 298.15 K – 363.15 K. The eutectic points were highly influenced by the variation of temperature. The eutectic point of the choline chloride:glycerol:water (ratio of 1:2.55:2.28) and methyltriphenylphosphonium bromide:glycerol:water (ratio of 1:4.25:3.75) is 213.4 K and 255.8 K, respectively. The stability of the lipase enzyme isolated from porcine pancreas (PPL) a
The Internet of Things (IoT) is a network of devices used for interconnection and data transfer. There is a dramatic increase in IoT attacks due to the lack of security mechanisms. The security mechanisms can be enhanced through the analysis and classification of these attacks. The multi-class classification of IoT botnet attacks (IBA) applied here uses a high-dimensional data set. The high-dimensional data set is a challenge in the classification process due to the requirements of a high number of computational resources. Dimensionality reduction (DR) discards irrelevant information while retaining the imperative bits from this high-dimensional data set. The DR technique proposed here is a classifier-based fe
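Classifier-based feature selection of the kind named above can be sketched minimally: rank each feature by how well a one-feature threshold classifier separates the classes, then keep only the top-k. The toy data, scoring rule, and k are illustrative assumptions, not the paper's method.

```python
import numpy as np

def single_feature_score(x: np.ndarray, y: np.ndarray) -> float:
    """Accuracy of a midpoint-threshold classifier using one feature."""
    thr = (x[y == 0].mean() + x[y == 1].mean()) / 2
    pred = (x > thr).astype(int)
    acc = (pred == y).mean()
    return max(acc, 1 - acc)            # allow either class orientation

def select_top_k(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k features whose single-feature classifiers score best."""
    scores = np.array([single_feature_score(X[:, j], y)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)
X = rng.normal(0, 1, (200, 5))
X[:, 3] += 3 * y                         # only feature 3 carries class signal
print(select_top_k(X, y, 2))
```

Discarding the low-scoring columns before the multi-class classifier is what cuts the computational cost on a high-dimensional attack data set.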
Extractive multi-document text summarization – summarization that aims to remove redundant information from a document collection while preserving its salient sentences – has recently enjoyed considerable interest in automatic models. This paper proposes an extractive multi-document text summarization model based on a genetic algorithm (GA). First, the problem is modeled as a discrete optimization problem and a specific fitness function is designed to effectively cope with the proposed model. Then, a binary-encoded representation together with heuristic mutation and local repair operators is proposed to characterize the adopted GA. Experiments are conducted on ten topics from the Document Understanding Conference DUC2002 datas
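The GA formulation can be sketched as follows: each candidate summary is a binary vector over sentences, and the fitness rewards topic coverage while penalizing summary length. The toy sentences, fitness weights, and operators below are simplified illustrations of this design, not the paper's exact model.

```python
import random

# Each "sentence" is reduced to its set of content words.
SENTENCES = [
    {"cat", "sat", "mat"},
    {"dog", "ran", "park"},
    {"cat", "dog", "play"},
    {"rain", "fell", "hard"},
]
TOPIC = {"cat", "dog", "play"}

def fitness(chrom):
    """Topic words covered by the selected sentences, minus a length penalty."""
    covered = set().union(*[s for s, g in zip(SENTENCES, chrom) if g])
    return len(covered & TOPIC) - 0.5 * sum(chrom)

def mutate(chrom, rate=0.25):
    """Flip each selection bit with the given probability."""
    return [1 - g if random.random() < rate else g for g in chrom]

random.seed(0)
pop = [[random.randint(0, 1) for _ in SENTENCES] for _ in range(20)]
for _ in range(50):                      # simple elitist (mu + lambda) loop
    pop += [mutate(c) for c in pop]
    pop = sorted(pop, key=fitness, reverse=True)[:20]
best = pop[0]
print(best, fitness(best))
```

Here the third sentence alone covers the whole topic, so the length penalty drives the GA toward selecting just it: the same coverage-versus-redundancy trade-off the model's fitness function encodes.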