Autism is a lifelong developmental condition that affects how people perceive the world and interact with others. An estimated one in every 100 people has autism, and it affects almost four times as many boys as girls. The tools commonly used to collect and analyse autism datasets are fMRI, EEG, and, more recently, eye tracking. A preliminary study of the eye-tracking trajectories of patients showed that a basic statistical analysis (principal component analysis) yields interesting results for the parameters studied, such as the time spent in a region of interest. Another study, which applied tools from Euclidean and non-Euclidean geometry to patients' eye trajectories, also produced interesting results. This research aims both to confirm the results of the preliminary study and to advance understanding of the processes involved in these experiments. Two tracks are followed: the first concerns the development of classifiers based on the statistical data already provided by the eye-tracking system, while the second focuses on finding new descriptors derived from the eye trajectories. In this paper, the study uses K-means with the Vector Measure Constructor Method (VMCM) and, more briefly, a second method based on the support vector machine (SVM) technique. Both methods play an important role in classifying people with and without autism spectrum disorder. The paper is a comparative study of these two methods.
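To illustrate the clustering side of the comparison above, here is a minimal K-means sketch over two-dimensional feature vectors. The feature names (time in a region of interest, fixation count) are assumptions for illustration, not the paper's actual descriptors, and this plain K-means is not the VMCM variant the study uses.

```python
def kmeans(points, k, iters=20):
    """Plain K-means over feature tuples, e.g. (time-in-ROI, fixation count).

    Deterministic init: the first k points serve as starting centroids.
    """
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical feature vectors: two well-separated groups of participants.
features = [(1.0, 1.0), (5.0, 5.0), (1.1, 0.9), (5.2, 4.8), (0.9, 1.2), (4.9, 5.1)]
cents, groups = kmeans(features, k=2)
```

A real pipeline would standardize the features first and validate the resulting labels against clinical diagnoses, which is where the supervised SVM comparison comes in.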
Regression analysis is a foundation stone of statistical knowledge. It mostly depends on the ordinary least squares method but, as is well known, that method requires several conditions to hold in order to operate accurately; when they fail, its results can be unreliable, and the absence of certain conditions can make it impossible to complete the analysis. Among those conditions is the absence of multicollinearity, a problem we detect among the independent variables using the Farrar–Glauber test. In addition to the requirement that the data be linear, the failure of this last condition has led to resorting to the
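As a hedged sketch of diagnosing multicollinearity among independent variables, the snippet below computes variance inflation factors (VIFs) with NumPy. The VIF is a related standard diagnostic, not the Farrar–Glauber test the abstract names (which combines chi-square, F, and t statistics); it is shown only to make the problem concrete.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X (n x p).

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j on the
    remaining columns plus an intercept. Values far above 1 signal collinearity.
    """
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        ss_tot = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / ss_tot
        out.append(1.0 / (1.0 - r2))
    return out

# Hypothetical data: x2 is almost a multiple of x1, x3 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 2.0 * x1 + 0.01 * rng.normal(size=100)   # near-perfect collinearity
x3 = rng.normal(size=100)
factors = vif(np.column_stack([x1, x2, x3]))
```

The two collinear columns receive very large VIFs while the independent column stays near 1, which is the situation remedies such as ridge regression are designed for.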
This article aims to provide a bibliometric analysis of intellectual capital research published in the Scopus database from 1956 to 2020 to trace the development of scientific activities that can pave the way for future studies by shedding light on the gaps in the field. The analysis focuses on 638 intellectual capital-related papers published in the Scopus database over 60 years, drawing upon a bibliometric analysis using VOSviewer. This paper highlights the mainstream of the current research in the intellectual capital field, based on the Scopus database, by presenting a detailed bibliometric analysis of the trend and development of intellectual capital research in the past six decades, including journals, authors, countries, inst
Diamond-like carbon films, amorphous hydrogenated forms of carbon, were prepared from liquid cyclohexane (C6H12) using a plasma jet operating at an alternating voltage of 7.5 kV and a frequency of 28 kHz. The plasma separates the cyclohexane molecules and transforms them into carbon nanoparticles. The effect of the argon flow rate (0.5, 1, and 1.5 L/min) on the optical and chemical-bonding properties of the films was investigated. The films were characterized by UV-Visible spectrophotometry, X-ray diffraction (XRD), Raman spectroscopy, and scanning electron microscopy (SEM). The main absorption appears around 296, 299, and 309 nm at the three argon flow rates. The value of the optical energy gap is 3.37, 3.55, and 3.68 eV at a different flow rate o
Numeral recognition is considered an essential preliminary step for optical character recognition, document understanding, and other tasks. Although several handwritten numeral recognition algorithms have been proposed so far, achieving adequate recognition accuracy and execution time remains challenging to date. In particular, recognition accuracy depends on the feature-extraction mechanism. As such, a fast and robust numeral recognition method is essential, one that meets the desired accuracy by extracting the features efficiently while maintaining fast implementation time. Furthermore, to date most of the existing studies have focused on evaluating their methods in clean environments, thus limiting understanding of their potential a
On the Internet, nothing is secure, and since we need means of protecting our data, the use of passwords has become important in the electronic world. To prevent hacking and to protect databases that contain important information such as ID cards and banking details, the proposed system stores the username after hashing it with the SHA-256 algorithm, and strong passwords are saved to repel attackers using one of two methods. The first method adds a random salt to the password using a CSPRNG, then hashes it with SHA-256 and stores it on the website. The second method uses the PBKDF2 algorithm, which salts the passwords and stretches them (deriving the password) before being ha
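The two storage methods described above can be sketched with Python's standard library (`secrets` as the CSPRNG, `hashlib` for SHA-256 and PBKDF2-HMAC). The salt length and iteration count below are illustrative assumptions, not values from the proposed system.

```python
import hashlib
import hmac
import secrets

def hash_password_salted(password, salt=None):
    """Method 1: random salt from a CSPRNG, then a single SHA-256 pass."""
    salt = salt if salt is not None else secrets.token_bytes(16)  # assumed 16-byte salt
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def hash_password_pbkdf2(password, salt=None, iterations=100_000):
    """Method 2: PBKDF2-HMAC-SHA256, which salts and stretches (derives) the password."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk.hex()

def verify(password, salt, stored, scheme):
    """Recompute with the stored salt and compare in constant time."""
    _, candidate = scheme(password, salt)
    return hmac.compare_digest(candidate, stored)
```

Only the salt and the digest are stored; at login the salt is reused to recompute the hash, and `hmac.compare_digest` avoids timing side channels. PBKDF2's iteration count is what makes offline guessing expensive relative to a single SHA-256 pass.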
Image quality plays a vital role in improving and assessing image compression performance. Image compression represents large image data as a new image with a smaller size suitable for storage and transmission. This paper aims to evaluate the implementation of hybrid techniques based on the tensor-product mixed transform. Compression and quality metrics such as compression ratio (CR), rate distortion (RD), peak signal-to-noise ratio (PSNR), and structural content (SC) are used to evaluate the hybrid techniques. The techniques are then compared according to these metrics to determine the best one. The main contribution is to improve the hybrid techniques. The proposed hybrid techniques consist of discrete wavel
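Two of the metrics named above have standard closed forms; as a minimal sketch (the exact variants and peak value used in the paper are not specified here, so a 255-level 8-bit image is assumed):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = original size / compressed size, e.g. 4.0 means 4:1 compression."""
    return original_bytes / compressed_bytes

# Toy example: a flat 8-bit image and a version offset by 10 grey levels.
img = np.full((4, 4), 100, dtype=np.uint8)
shifted = img + 10
quality_db = psnr(img, shifted)
```

An offset of 10 grey levels gives an MSE of 100 and hence a PSNR of about 28 dB, which is how these metrics turn a visual comparison into a single number for ranking techniques.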
This study investigates the impact of spatial resolution enhancement on supervised classification accuracy using Landsat 9 satellite imagery, achieved through pan-sharpening techniques leveraging Sentinel-2 data. Various methods were employed to synthesize a panchromatic (PAN) band from Sentinel-2 data, including dimension reduction algorithms and weighted averages based on correlation coefficients and standard deviation. Three pan-sharpening algorithms (Gram-Schmidt, Principal Components Analysis, Nearest Neighbour Diffusion) were employed, and their efficacy was assessed using seven fidelity criteria. Classification tasks were performed utilizing Support Vector Machine and Maximum Likelihood algorithms. Results reveal that specifi
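The weighted-average route to a synthetic PAN band might be sketched as below. The weights here are placeholders: the study derives them from correlation coefficients and standard deviations, and those actual values are not reproduced.

```python
import numpy as np

def synthetic_pan(bands, weights):
    """Synthesize a panchromatic band as a normalized weighted average of
    multispectral bands. `bands` is a (k, H, W) stack; `weights` has length k."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the output stays in the input radiometric range
    return np.tensordot(w, np.asarray(bands, dtype=float), axes=1)

# Toy 2x2 example with two bands and hypothetical weights 3:1.
bands = np.stack([np.full((2, 2), 10.0), np.full((2, 2), 30.0)])
pan = synthetic_pan(bands, [3.0, 1.0])
```

The resulting single band can then feed a pan-sharpening algorithm such as Gram-Schmidt in place of a sensor-provided PAN channel.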