Information systems and data exchange between government institutions are growing rapidly around the world, and with them, the threats to information within government departments are growing. In recent years, research into the development and construction of secure information systems in government institutions has been very active. Based on information system principles, this study proposes a model for providing and evaluating security for all departments of government institutions. The requirements of any information system begin with the organization's surroundings and objectives. Most prior techniques did not take into account the organizational component on which the information system runs, despite the relevance of this component to the application of access and control methods for security. Based on this, we propose a model for improving security across all departments of government institutions by addressing security issues early in the system's life cycle, integrating them with functional elements throughout the life cycle, and focusing on the system's organizational aspects. The main security aspects covered are system administration, organizational factors, enterprise policy, and awareness and cultural aspects.
Traffic classification refers to the task of categorizing traffic flows into application-aware classes such as chats, streaming, VoIP, etc. Most network traffic identification systems are feature-based; these features may be static signatures, port numbers, statistical characteristics, and so on. Although current methods of data flow classification are effective, they still lack inventive approaches to meet the needs of vital points such as real-time traffic classification, low power consumption, low Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a one-dimensional Convolutional Neural Network (1D-CNN) to automatically learn more representational characteristics …
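The abstract does not disclose the FDPHI architecture itself; the following is only a minimal sketch of a 1D-CNN packet-header classifier in PyTorch, assuming each sample is a fixed number of raw header bytes scaled to [0, 1]. The layer sizes, header length of 64 bytes, and class count are illustrative assumptions, not the paper's configuration:

```python
# Minimal 1D-CNN sketch for packet-header classification (assumed setup:
# 64 raw header bytes per sample, scaled to [0, 1]; 6 traffic classes).
import torch
import torch.nn as nn

class HeaderCNN(nn.Module):
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # learn local byte patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # length-independent pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, header_len) with byte values scaled to [0, 1]
        return self.classifier(self.features(x).squeeze(-1))

model = HeaderCNN()
dummy = torch.rand(8, 1, 64)   # a batch of 8 hypothetical headers
print(model(dummy).shape)      # torch.Size([8, 6]) -> class logits
```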
In this research, several estimators of the hazard function are introduced. These estimators rely on one of the nonparametric methods, namely the kernel function for censored data, with varying bandwidths and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely: rectangular, Epanechnikov, biquadratic, and triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most of the cases, the results have proved that the local bandwidth is the best for all the …
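As a concrete illustration of the kernel approach, the sketch below smooths the Nelson-Aalen increments of right-censored data with an Epanechnikov kernel and a fixed global bandwidth; the paper's local-bandwidth and boundary-kernel variants would modify the kernel and bandwidth near the boundary t = 0. The toy data and bandwidth value are assumptions:

```python
# Kernel-smoothed hazard estimate for right-censored data:
# h_hat(t) = (1/b) * sum_i K((t - t_i)/b) * d_i / n_i
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_hazard(t_grid, times, events, b):
    """Smooth the Nelson-Aalen jumps d_i / n_i with a kernel of bandwidth b."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)        # n_i: subjects still at risk at t_i
    jumps = events / at_risk          # d_i / n_i (0 where censored)
    u = (t_grid[:, None] - times[None, :]) / b
    return (epanechnikov(u) * jumps).sum(axis=1) / b

# Toy censored sample: event indicator 1 = observed event, 0 = censored.
rng = np.random.default_rng(0)
times = rng.exponential(2.0, size=200)
events = rng.integers(0, 2, size=200).astype(float)
grid = np.linspace(0.1, 5.0, 50)
print(kernel_hazard(grid, times, events, b=0.8)[:5])
```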
Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for identifying elastic rock properties and fluid types. It has been applied in the present study using reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia. The AVO response along the 2D pre-stack seismic data in the Laminaria High, NW shelf of Australia, was also investigated. Three hypotheses were suggested to investigate the AVO behaviour of the amplitude anomalies, in which three different factors, fluid substitution, porosity, and thickness (wedge model), were tested. The AVO models with the synthetic gathers were analysed using log information to find which of these is the …
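The exact AVO equations used are not given in this excerpt; a common starting point for modelling the amplitude-versus-angle response is the two-term Shuey approximation R(theta) ~ R0 + G sin^2(theta), sketched below with purely illustrative intercept and gradient values:

```python
# Two-term Shuey AVO sketch (illustrative values, not the study's models).
import numpy as np

def shuey_two_term(r0, g, theta_deg):
    """Reflection coefficient vs incidence angle: R = R0 + G * sin^2(theta)."""
    theta = np.radians(theta_deg)
    return r0 + g * np.sin(theta) ** 2

angles = np.arange(0, 41, 5)                   # typical pre-stack angle range
brine = shuey_two_term(0.06, -0.10, angles)    # hypothetical brine sand
gas = shuey_two_term(-0.04, -0.20, angles)     # hypothetical gas sand
for a, rb, rg in zip(angles, brine, gas):
    print(f"{a:2d} deg  brine {rb:+.3f}  gas {rg:+.3f}")
```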
The main objective of this work is to propose a new routing protocol for wireless sensor networks employed to serve IoT systems. The routing protocol has to adapt to different requirements in order to enhance the performance of IoT applications. Link quality, node depth, and energy are used as metrics to make routing decisions. Comparison with other protocols is essential to show the improvements achieved by this work, thus protocols designed to serve the same purpose, such as AODV, REL, and LABILE, are chosen for comparison with the proposed protocol. To make the design integrative and holistic, some important features, such as actuating and mobility, are added and tested. These features are greatly required by some IoT applications and im…
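The abstract does not specify the protocol's actual decision rule; the following is a minimal sketch of the kind of composite routing cost it describes, combining link quality, node depth, and residual energy. The weights and normalization are assumptions:

```python
# Composite next-hop cost from link quality, depth, and energy (assumed
# weights; lower cost = better next hop).
from dataclasses import dataclass

@dataclass
class Neighbor:
    link_quality: float   # normalized in [0, 1], higher is better
    depth: int            # hops to the sink, lower is better
    energy: float         # residual energy fraction in [0, 1], higher is better

def route_cost(n: Neighbor, w_lq=0.4, w_depth=0.3, w_energy=0.3, max_depth=10):
    # Each term is scaled to [0, 1] so the weights are comparable.
    return (w_lq * (1 - n.link_quality)
            + w_depth * (n.depth / max_depth)
            + w_energy * (1 - n.energy))

candidates = [Neighbor(0.9, 3, 0.5), Neighbor(0.7, 2, 0.9), Neighbor(0.95, 4, 0.2)]
print(min(candidates, key=route_cost))   # chosen next hop
```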
The study relied on data about the health sector in Iraq in 2006, in cooperation with the Ministry of Health and the Central Bureau of Statistics and Information Technology in 2007. It included estimates of the population distribution of Baghdad province and the country, depending on the population distribution for 1997, and an evaluation of the health sector covering health institutions, health staff, and other health services. The research aims are: measuring the amount and size of the growth of health services (increase and decrease), comparing what was achieved in Iraq and Baghdad, and evaluating the effectiveness of the distribution of supplies and health services (physical and human) against the size of the population distribution and …
Water supply networks are marred by serious risks of imperceptible pipeline leakage, posing sustainability and performance threats. This article highlights the use of vibratory signal features to get around the drawbacks of traditional methods, in a highly detailed framework for leak detection based on CatBoost. The framework demonstrated excellent diagnostic performance, and a thorough test performance evaluation was carried out on five leakage configurations. The proposed system achieved an accuracy of 98.1% (with variance well within expected limits), beating traditional competitors such as Random Forest (97.3%) and Support Vector Machine (93.8%). For example, the area under the receiver-operating characteristic curve was 0.995, in…
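As a hedged illustration of the approach, the sketch below trains a CatBoost classifier on a synthetic stand-in for vibration-signal features; the feature matrix, labels, and hyperparameters are placeholders, not the article's actual feature set or tuning:

```python
# CatBoost leak-vs-no-leak sketch on synthetic "vibration features".
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))       # 12 hypothetical vibration features
y = rng.integers(0, 2, size=1000)     # 1 = leak, 0 = no leak (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1, verbose=0)
model.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("ROC AUC :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```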
The logistic regression model is one of the oldest and most common regression models, known as a statistical method used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, an estimation method based on the principle of sampling with replacement: a resample of (n) elements is drawn randomly, with replacement, from the (N) elements of the original data. It is a computational method used to determine a measure of accuracy for the estimated statistics, and for this reason, it was used here to find more accurate estimates. The ma…
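The following sketch illustrates the bootstrap idea on a logistic regression slope: resample n rows with replacement, refit, and summarize the spread of the estimates. The synthetic data and the number of replicates are illustrative:

```python
# Bootstrap standard error of a logistic regression coefficient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x[:, 0])))   # true intercept 0.5, slope 1.2
y = rng.binomial(1, p)

B, coefs = 500, []
for _ in range(B):
    idx = rng.integers(0, n, size=n)           # sample n rows with replacement
    fit = LogisticRegression().fit(x[idx], y[idx])
    coefs.append(fit.coef_[0, 0])

coefs = np.array(coefs)
print(f"slope: mean {coefs.mean():.3f}, bootstrap SE {coefs.std(ddof=1):.3f}")
```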
This study aims to test whether the institutions listed on the Iraq Stock Exchange show a significant correlation between the level of conservative accounting practice and the level of market share returns during the Coronavirus pandemic period, as one of the policies to confront the economic repercussions of the pandemic. The sample included institutions listed on the Iraq Stock Exchange during 2019 and 2020, i.e., the period before the Coronavirus pandemic and during it, for the purpose of comparison. The market value to book value model was used, and the study found that conservative institutions had achieved the highest level of market share prices compared to non-conservative …
This research attempts to evaluate the role of the information system by highlighting its importance in providing data and information to the tax administration in the process of tax accounting for those who are subject to income tax, whether they are individuals or companies, since an effective information system provides accurate and reliable information in a timely manner.
In the theoretical part, the research addresses its main problem: whether the information system applied in the General Commission for Taxes is capable of achieving its role in reducing the phenomenon of tax evasion. The existence of a set of conditions in the Commission may lead to increased tax evasion by taxpayers …
The region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of the sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype co-classification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, this haplotype is labeled …
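As a rough illustration of the second stage, the sketch below takes already-inferred haplotype counts in cases and controls, labels each haplotype as case- or control-enriched, and applies a 2x2 chi-square test; the counts and the test are illustrative stand-ins, not the paper's exact procedure:

```python
# Per-haplotype case/control comparison on synthetic inferred counts.
from scipy.stats import chi2_contingency

# haplotype -> (count in cases, count in controls); synthetic numbers
hap_counts = {"ACGT": (120, 80), "ACGA": (60, 95), "TCGT": (20, 25)}
n_case = sum(c for c, _ in hap_counts.values())
n_ctrl = sum(c for _, c in hap_counts.values())

for hap, (a, b) in hap_counts.items():
    table = [[a, n_case - a], [b, n_ctrl - b]]   # this haplotype vs. all others
    chi2, p, _, _ = chi2_contingency(table)
    label = "case-enriched" if a / n_case > b / n_ctrl else "control-enriched"
    print(f"{hap}: {label}, chi2={chi2:.2f}, p={p:.4f}")
```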