In this research, low-complexity and efficient cryptanalysis approaches are proposed for recovering passwords (encryption keys). Passwords remain one of the most common means of securing computer systems. Most organizations rely on password authentication, and it is therefore very important for them to require their users to create strong passwords; however, they usually ignore how usable those passwords are. The more complex passwords become, the more they frustrate users, who resort to coping strategies such as appending "123" or repeating a word to lengthen the password. These habits reduce security and, more importantly, there is no scientific basis for such password-creation policies to ensure that passwords created under them are resistant to real attacks. This work describes different password-creation policies and password checkers that try to help users create strong passwords, and addresses their shortcomings. Metrics for password strength are explored, and efficient approaches to computing these metrics over password distributions are introduced. Furthermore, an efficient technique is described for estimating a password's strength from its likelihood of being cracked by an attacker. In addition, a tool called PAM, a password analyzer and modifier, has been developed and is explained in detail to help users obtain strong passwords using these metrics.
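The paper's own metrics and the PAM tool are not reproduced here; purely as a rough illustration of an entropy-style strength metric that also penalizes the coping strategies mentioned above, a minimal sketch (function names are hypothetical) might look like this:

```python
import math
import string

def entropy_bits(password: str) -> float:
    """Rough strength estimate: log2 of the search space an attacker
    would face under a naive brute-force model."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def adjusted_bits(password: str) -> float:
    """Penalize the coping strategies noted above: a trailing '123'
    or a doubled word adds length but little real security."""
    bits = entropy_bits(password)
    if password.endswith("123"):
        bits -= math.log2(1000)      # trailing digit runs are highly predictable
    half = len(password) // 2
    if len(password) % 2 == 0 and password[:half] == password[half:]:
        bits /= 2                    # repeated word: only half is novel
    return max(bits, 0.0)

print(adjusted_bits("correcthorse123"))
```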
In this research, fuzzy nonparametric methods based on several smoothing techniques were applied to real data from the Iraqi stock market, specifically data on the Baghdad Company for Soft Drinks for the year 2016 (1/1/2016-31/12/2016). A sample of 148 observations was used to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. Comparing the goodness-of-fit (G.O.F.) criterion across the three techniques, the lowest value was obtained by the k-nearest-neighbor smoother with the Gaussian kernel function.
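The abstract does not give the exact fuzzy smoother or G.O.F. formula, so the following is a crisp (non-fuzzy) sketch of k-nearest-neighbor regression with Gaussian kernel weights, scored by a simple residual sum of squares as a stand-in goodness-of-fit criterion; the bandwidth, k, and the synthetic data are assumptions:

```python
import numpy as np

def knn_gaussian_smoother(x_train, y_train, x0, k=10, bandwidth=1.0):
    """k-nearest-neighbor regression with Gaussian kernel weights:
    the fitted value at x0 is a weighted mean of the k closest y's."""
    d = np.abs(x_train - x0)
    idx = np.argsort(d)[:k]                        # the k nearest neighbors
    w = np.exp(-0.5 * (d[idx] / bandwidth) ** 2)   # Gaussian kernel weights
    return np.sum(w * y_train[idx]) / np.sum(w)

# A residual sum of squares over the sample is one simple way to compare
# smoothers, in the spirit of the abstract's G.O.F. comparison.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 148))   # 148 observations, as in the study
y = np.sin(x) + rng.normal(0, 0.2, x.size)
fitted = np.array([knn_gaussian_smoother(x, y, xi) for xi in x])
print(np.sum((y - fitted) ** 2))
```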
The process of combining the significant information from a series of images into a single image is called image sharpening or image fusion, where the resulting fused image carries more spatial and spectral information than any of the input images. In this research, two images of the same area at different spatial resolutions were used: a panchromatic image and a multispectral image, with spatial resolutions of 0.5 m and 2 m respectively, both captured by the WorldView-2 sensor. Four pan-sharpening methods (HSV, Brovey (color normalization), Gram-Schmidt, and PCA) were applied to combine the two images into a single pan-sharpened multispectral image.
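Of the four methods, the Brovey (color-normalized) transform is the simplest to state: each multispectral band is multiplied by the ratio of the panchromatic band to the sum of the bands. A minimal sketch, assuming the multispectral image has already been resampled to the 0.5 m pan grid (the synthetic arrays below merely stand in for the WorldView-2 data):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey (color-normalized) pan-sharpening: each multispectral band
    is scaled by the ratio of the pan band to the per-pixel band sum.

    ms  : (H, W, B) multispectral image, resampled to the pan grid
    pan : (H, W)    panchromatic image at the finer resolution
    """
    ms = ms.astype(np.float64)
    intensity = ms.sum(axis=2) + eps   # per-pixel band sum
    ratio = pan / intensity           # spatial-detail injection factor
    return ms * ratio[..., None]      # rescale every band

pan = np.random.rand(256, 256)        # stand-in for the 0.5 m pan image
ms = np.random.rand(256, 256, 4)      # stand-in for the resampled MS image
sharp = brovey_pansharpen(ms, pan)
print(sharp.shape)
```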
Psychological research centers help indirectly connect professionals across the fields of human life, the job environment, family life, and psychological infrastructure for psychiatric patients. This research aims to detect job-apathy patterns from the behavior of employee groups at the University of Baghdad and the Iraqi Ministry of Higher Education and Scientific Research. The investigation presents an approach using data mining techniques to acquire new knowledge, and it differs from statistical studies in supporting the researchers' evolving needs. These techniques handle redundant or irrelevant attributes in order to discover interesting patterns. The principal issue identifies several important and affective questions taken from …
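The mining pipeline itself is truncated above; purely as a hypothetical illustration of discarding irrelevant attributes before pattern discovery, a scikit-learn-style feature-selection step could look like this (all data and labels below are synthetic, not the study's questionnaire):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Hypothetical survey matrix: rows = employees, columns = questionnaire
# items scored 0-4; y = 1 marks a group labeled as showing job apathy.
rng = np.random.default_rng(1)
X = rng.integers(0, 5, size=(200, 12))   # 12 candidate attributes
y = rng.integers(0, 2, size=200)

selector = SelectKBest(chi2, k=5)        # keep the 5 most informative items
selector.fit(X, y)
print("retained attribute indices:", selector.get_support(indices=True))
```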
Image compression is a serious issue in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and, in addition, may exploit the limits of human vision or perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second incorporates near-lossless compression …
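The paper's exact predictor is not shown here; a minimal sketch of the modelling idea, fitting a first-order polynomial surface per block so that only a few coefficients plus a residual need to be coded, might read:

```python
import numpy as np

def polynomial_model_block(block):
    """Fit a first-order polynomial surface a0 + a1*x + a2*y to one image
    block (the 'mathematical model'); the residual is block - model."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64),
                                 rcond=None)
    model = (A @ coeffs).reshape(h, w)
    return coeffs, block - model     # 3 coefficients + residual

# A coder would store the few coefficients per block plus the (quantized)
# residual; thresholding small residuals toward zero gives a lossy variant.
img = np.random.randint(0, 256, (8, 8))
coeffs, residual = polynomial_model_block(img)
print(coeffs.round(2), float(np.abs(residual).max()))
```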
The petroleum sector has a significant influence on the development of multiphase detection sensor techniques; a crude oil tank is used to separate the crude oil from water. In this paper, a measuring system based on a simple, low-cost two-parallel-plate capacitance sensor is designed and implemented around a microcontroller-based embedded system plus a PC to automatically identify the dynamic gas/oil and oil/water interfaces in the crude oil tank. The permittivity differences of the two-phase liquids are used to locate their interface by measuring the relative change in the sensor's capacitance as it passes through the liquid interface. The experimental results for determining the liquid interface are satisfactory …
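The underlying physics is the parallel-plate formula C = ε₀εᵣA/d: because gas, oil, and water differ sharply in relative permittivity, the measured capacitance jumps at each interface. A small sketch with typical textbook permittivity values (the geometry below is illustrative, not the paper's sensor):

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def plate_capacitance(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Typical textbook relative permittivities: gas ~1, crude oil ~2.2,
# water ~80. As the sensor passes through the tank, a jump in measured
# capacitance marks a phase interface.
for phase, eps_r in [("gas", 1.0), ("oil", 2.2), ("water", 80.0)]:
    c = plate_capacitance(eps_r, area_m2=0.01, gap_m=0.005)
    print(f"{phase}: {c * 1e12:.1f} pF")
```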
The city of Samawah is one of the most important cities to appear in the poverty area of the poverty map produced by the Ministry of Planning, despite being an important provincial centre. Although it has great development potential, it was neglected for more than 50 years. This dereliction has caused a series of negative accumulations at the urban levels (environmental, social and economic). The basic idea of this research is therefore to identify some of the challenges that are preventing the growth and development of the city. The methodology of the research is to extrapolate from the existing conditions through analysis of the results, data and environmental impact assessment of the project …
In image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, extracted using feature detection/extraction and feature description techniques; in computer vision, features denote informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image information directly, which is why various feature extraction techniques have been introduced and have progressed rapidly. This paper presents a general overview of the categories of feature extraction for images.
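As one concrete example of the detect-then-describe pipeline surveyed here, the sketch below runs OpenCV's ORB detector, which pairs a corner detector with a binary descriptor; the input image is synthetic noise standing in for a real photograph:

```python
import cv2
import numpy as np

# Synthetic grayscale image stands in for a real photograph.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)   # detector + binary descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)

print(len(keypoints), "keypoints detected")
if descriptors is not None:
    print("descriptor matrix shape:", descriptors.shape)  # (N, 32) bytes
```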
Machine learning methods, one of the most important branches of the promising field of artificial intelligence, matter greatly in all sciences, such as engineering and medicine, and have recently become widely involved in the statistical sciences and their various branches, including survival analysis; they can be considered a new branch for estimating survival, alongside the parametric, nonparametric and semi-parametric methods widely used in statistical research. In this paper, the estimation of survival from medical images of patients with breast cancer receiving treatment in Iraqi hospitals is discussed. Three algorithms for feature extraction are explained: the first is principal component analysis …
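As a hedged sketch of that first step, PCA-based feature extraction from flattened image patches could look like this (the data below is random and merely stands in for the patients' medical images):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in: 100 grayscale patches of 32x32, flattened to
# vectors, playing the role of the medical images.
rng = np.random.default_rng(2)
images = rng.random((100, 32 * 32))

pca = PCA(n_components=20)            # keep 20 principal components
features = pca.fit_transform(images)  # low-dimensional feature vectors

print(features.shape)                 # (100, 20)
print("variance explained:", pca.explained_variance_ratio_.sum().round(3))
```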
Cohesion is well known as the study of the relationships, whether grammatical and/or lexical, between the different elements of a particular text through the use of what are commonly called 'cohesive devices'. These devices bring connectivity and bind a text together; moreover, the nature and the number of such cohesive devices usually affect the understanding of that text, in the sense of making it easier to comprehend. The present study is intended to examine the use of grammatical cohesive devices in relation to narrative techniques. The story of Joseph from the Holy Quran has been selected for examination using Halliday and Hasan's Model of Cohesion (1976, 1989). The aim of the study is to examine comparatively to what extent the type …