Today, artificial intelligence has become one of the most important fields for creating intelligent computer programs that simulate the human mind. In the medical field, the goal of artificial intelligence is to assist doctors and health-care workers in diagnosing diseases and providing clinical treatment, reducing the rate of medical error, and saving lives. The main and most widely used technologies are expert systems, machine learning, and big data. This article provides a brief overview of these three techniques to help readers understand them and their importance.
Chronic viral hepatitis is an important health problem worldwide, with hepatitis B virus (HBV) and hepatitis C virus (HCV) infections being the main causes of liver insufficiency. The study included 100 blood samples from patients with chronic viral hepatitis: 50 with HBV infection and 50 with HCV infection. Twenty apparently healthy, age- and gender-matched subjects were included as a control group. Of the 50 patients with HBV, 36 (72%) were male and 14 (28%) were female; of the patients with HCV, 32 (64%) were male and 18 (36%) were female. The mean age was 36.9 ± 15.8 years for HBV patients and 39.9 ± 14.2 years for HCV patients. The results of the liver function tests showed no significant differ
This paper presents a comparison of denoising techniques using a statistical approach, principal component analysis with local pixel grouping (PCA-LPG), in which the procedure is iterated a second time to further improve denoising performance, alongside other enhancement filters. These include an adaptive Wiener low-pass filter applied to a grayscale image degraded by constant-power additive noise, based on statistics estimated from a local neighborhood of each pixel; a median filter, in which each output pixel contains the median value of the M-by-N neighborhood around the corresponding pixel in the input image; a Gaussian low-pass filter; and an order-statistic filter. Experimental results show that the LPG-PCA method
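The classical comparison filters mentioned in the abstract can be sketched as follows. This is a minimal illustration using SciPy equivalents on a synthetic image, not the authors' implementation or data; the image, noise level, and window sizes are assumptions for the example.

```python
# Sketch of the comparison filters: median, Gaussian low-pass, and
# adaptive Wiener, applied to a synthetic image with additive noise.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # smooth gradient image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)    # constant-power additive noise

den_median = median_filter(noisy, size=3)       # median of each 3x3 neighborhood
den_gauss = gaussian_filter(noisy, sigma=1.0)   # Gaussian low-pass
den_wiener = wiener(noisy, mysize=3)            # adaptive Wiener from local statistics

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

# Each filter should reduce the error relative to the noisy input.
print(mse(noisy, clean), mse(den_median, clean), mse(den_gauss, clean))
```

Comparing the MSE of each filtered output against the noisy input gives a simple quantitative basis for the kind of comparison the paper reports.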
In this paper we study and design two feed-forward neural networks. The first approach uses a radial basis function network and the second uses a wavelet basis function network to approximate the mapping from the input to the output space. The trained networks are then used in a conjugate gradient algorithm to estimate the output. These neural networks are then applied to solve differential equations. Results of applying these algorithms to several examples are presented.
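The radial-basis-function approach can be sketched in a few lines. This is a simplified illustration, assuming Gaussian basis functions; the paper's conjugate-gradient training is replaced here by a plain linear least-squares fit of the output-layer weights, and the target function is an arbitrary choice for the example.

```python
# Minimal radial basis function (RBF) network sketch: Gaussian basis
# functions with output weights fitted by linear least squares.
import numpy as np

def rbf_design(x, centers, width):
    # Gaussian basis: phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)            # example input-to-output mapping

centers = np.linspace(0.0, 1.0, 10)    # 10 evenly spaced basis centers
Phi = rbf_design(x, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights

y_hat = Phi @ w
print(float(np.max(np.abs(y_hat - y))))  # worst-case approximation error
```

A wavelet-basis network follows the same structure, with the Gaussian basis replaced by dilated and translated copies of a mother wavelet.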
Background: hallux valgus is considered the most common foot deformity, affecting females more than males, characteristically manifesting as lateral deviation of the big toe and widening of the first-second intermetatarsal angle, with deformity of the second toe in some severe cases. Objective: to make a radiological and clinical assessment of two surgical osteotomy methods used in the treatment of hallux valgus and to compare them: the first is the distal dome osteotomy, and the second is the distal wedge metatarsal osteotomy. Patients and methods: a total of 36 feet of 28 patients suffering from hallux valgus, with a mean age of 50.3 years, were included in this study and followed for 6-30 months (mean follow-up of 8.8 months).
Average interstellar extinction curves for the Galaxy and the Large Magellanic Cloud (LMC) over the wavelength range 1100 Å – 3200 Å were obtained from observations by the IUE satellite. The two extinction curves, for our Galaxy and the LMC, were normalized to Av = 0 and E(B-V) = 1 to meet standard criteria. It was found that the differences between the two extinction curves appear clearly in the middle and far ultraviolet regions due to the presence of different populations of small grains, which contribute very little at longer wavelengths. Using new IUE reduction techniques leads to more accurate results.
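The normalization step described above can be sketched numerically. This is a toy example with made-up extinction values, not the IUE data from the paper: a measured curve A(λ) is rescaled to the standard form k(λ) = E(λ−V)/E(B−V) = (A(λ) − A_V)/(A_B − A_V), which forces k(B) = 1 and k(V) = 0, i.e. a curve normalized to E(B−V) = 1.

```python
# Sketch of normalizing an extinction curve to E(B-V) = 1
# (synthetic A(lambda) values for illustration only).
import numpy as np

wavelengths = np.array([1100.0, 1600.0, 2175.0, 3200.0, 4400.0, 5500.0])  # Angstrom
A = np.array([8.0, 6.5, 7.8, 5.0, 4.1, 3.1])  # made-up extinction, magnitudes

A_B, A_V = A[-2], A[-1]      # extinction at B (4400 A) and V (5500 A)
k = (A - A_V) / (A_B - A_V)  # E(lambda - V) / E(B - V)

print(k[-2], k[-1])  # k(B) = 1.0, k(V) = 0.0 by construction
```

Normalizing both curves this way is what makes the Galactic and LMC curves directly comparable in the mid- and far-ultraviolet.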
The purpose of the research is to present a proposed accounting-system model for converting and aggregating accounting information within the framework of differentiated accounting systems. The research methodology consists of: the research problem, which is the existence of differentiated and dispersed accounting systems that operate within governmental economic units while at the same time seeking to achieve a unified vision and goals for the organization; and the central research hypothesis, which is the possibility of converting accounting information from the government accounting system to the unified accounting system and then aggregating those systems. The research was conducted at the College of Administrat
Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. In this research, three approaches to the segmentation of anatomical structures in medical images were implemented and compared: the original snake model, the distance potential force model, and the Gradient Vector Flow (GVF) snake model. We used Computed Tomography (CT) images for our experiments. Our experiments show that the original snake model has two problems: the first is its limited capture range and the second is its poor convergence. The distance potential force model solves only the first problem and fails on the second. The Gradient Vector Flow (GVF) snake provides a go
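The basic snake idea can be illustrated with scikit-image's `active_contour`, which implements the original snake model (not the GVF variant compared in the paper). This sketch uses a synthetic bright disk in place of a CT structure, with parameter values chosen for the example.

```python
# Illustration of the original active contour (snake) model on a
# synthetic image: a circular contour shrinks onto a disk boundary.
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic "organ": a bright disk of radius 20 on a dark background.
img = np.zeros((100, 100))
rr, cc = disk((50, 50), 20)
img[rr, cc] = 1.0

# Initialize the contour as a circle of radius 30 around the object.
s = np.linspace(0, 2 * np.pi, 100)
init = np.column_stack([50 + 30 * np.sin(s), 50 + 30 * np.cos(s)])

# Smoothing the image widens the edge's capture range, one of the
# limitations of the original model that GVF addresses directly.
snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)

# Mean distance of the converged contour from the image center.
r = float(np.sqrt(((snake - 50) ** 2).sum(axis=1)).mean())
print(r)
```

In the GVF formulation, the image-gradient force field is replaced by a diffused vector field, which extends the capture range and lets the contour converge into boundary concavities.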
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for