With the rapid progress of information technology and computer networks, geospatial data has become very easy to reproduce and share because of its digital form. Consequently, the use of geospatial data suffers from various problems such as data authentication, ownership protection, and illegal copying, which pose a major challenge to its future use. This paper introduces a new watermarking scheme to ensure copyright protection of digital vector maps. The main idea of the proposed scheme is to transform the digital map into the frequency domain using Singular Value Decomposition (SVD) in order to determine suitable areas in which to insert the watermark data. The digital map is separated into isolated parts, and watermark data are embedded within the nominated magnitudes of each part when definite criteria are satisfied. The efficiency of the proposed watermarking scheme is assessed with statistical measures based on two factors: fidelity and robustness. Experimental results demonstrate that the proposed scheme represents an ideal trade-off between the conflicting goals of low distortion and high robustness, and that it resists many kinds of attacks.
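The abstract does not give the embedding details, but the general SVD-watermarking idea can be sketched as follows: decompose a block of map coordinates, perturb a singular value by a small amount, and reconstruct. This is a minimal illustrative sketch, not the authors' scheme; the coordinate values, the strength parameter alpha, and the choice of which singular value to mark are all hypothetical.

```python
import numpy as np

# Hypothetical block of vertex coordinates from a vector map (rows = points, cols = x, y).
block = np.array([[12.5, 4.1],
                  [13.0, 4.4],
                  [13.6, 4.2],
                  [14.1, 4.7]])

# Transform the block with Singular Value Decomposition.
U, s, Vt = np.linalg.svd(block, full_matrices=False)

# Embed one watermark bit by slightly perturbing the largest singular value.
# The strength alpha controls the fidelity/robustness trade-off.
alpha = 1e-3
bit = 1
s_marked = s.copy()
s_marked[0] += alpha * bit

# Reconstruct the watermarked block; the per-coordinate distortion is bounded by alpha.
marked = U @ np.diag(s_marked) @ Vt
print(np.max(np.abs(marked - block)))
```

Because the perturbation is confined to one singular value, the geometric distortion of the block stays below alpha, which is the fidelity side of the trade-off the abstract describes.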
Improved environmental protection requires better education and training on the regulations that govern the various activities in petroleum operations.
Data from industry are essential to ensure that existing or proposed regulations are based on accurate scientific information and that they contribute to the best environmental protection without adding useless restrictions to the industry.
This research presents an overview of the major environmental regulations and issues facing the petroleum industry.
It covers many issues, including the regulation of drilling and production activities that may pollute surface water, the clean-up of existing hazardous waste sites, the storage and management of various chemicals, and other important aspects.
This study aims to identify the most important legislatures and legal frameworks pertaining to advertising aimed at children. It focuses on the Western approach, which is characterized by the variety of its perspectives in presenting issues and identifying problems. Although studies show a certain awareness of advertising's impact on children, most legislatures reject laws restricting broadcast advertising spots intended for children under 12 years of age, with the exception of Sweden and the Canadian province of Quebec, which opted for a total ban on broadcast advertising messages targeting children.
In this paper, we introduce the notion of the soft bornological group to address the problem of boundedness for soft groups. We combine soft set theory with bornology to produce a new structure, called the soft bornological group, in which both the product and inverse maps are soft bounded. We also study the actions of soft bornological groups on soft bornological sets; the aim is to partition a soft bornological set into orbit classes under the action of a soft bornological group. In addition, we explain the centralizer, normalizer, and stabilizer in detail. The main result proves that the product of soft bornological groups is soft bornol
The Schiff base ligand [(E)-3-(2-hydroxy-5-methylbenzylideneamino)-1-phenyl-1H-pyrazol-5(4H)-one] and its complexes with the metal ions Mn(II), Co(II), Ni(II), Cu(II), Cd(II), and Hg(II) were prepared and characterized on the basis of the mass spectrum of L, elemental analyses, FTIR, electronic spectra, magnetic susceptibility, molar conductivity measurements, and thermodynamic functions (∆H°, ∆S°, and ∆G°). The conductivity results indicated that all complexes are non-electrolytes. Spectroscopic and other analytical studies reveal a distorted octahedral geometry for all complexes. The antibacterial activity of the ligand and the prepared metal complexes was also studied against gram-positive and gram-negative bacteria.
Frequent data in weather records are essential for forecasting, numerical model development, and research, but interruptions in data recording may occur for various reasons. This study therefore aims to find a way to treat these missing data and to assess their accuracy by comparing them with the original data values. The mean method was used to treat daily and monthly missing temperature data. The results show that, when treating the monthly temperature data for the stations (Baghdad, Hilla, Basra, Nasiriya, and Samawa) in Iraq over the whole period (1980-2020), the percentage of matching between the original and the treated values did not exceed 80%. The period was therefore divided into four periods, and it was noted that most of the congruence values increased, re
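The mean method described above can be illustrated in a few lines: each missing value is replaced by the mean of the observed values in the same series. This is only a toy sketch; the temperature values below are invented and do not come from the stations or period studied in the abstract.

```python
import numpy as np

# Hypothetical monthly mean temperatures (°C) with gaps marked as NaN,
# standing in for a station record like those described in the abstract.
temps = np.array([22.1, 23.4, np.nan, 25.0, np.nan, 24.2])

# Mean method: replace each missing value with the mean of the observed values.
mean_of_observed = np.nanmean(temps)
filled = np.where(np.isnan(temps), mean_of_observed, temps)

print(filled)
```

Splitting a long record into shorter sub-periods and imputing within each, as the study does, simply means computing `mean_of_observed` over each sub-period instead of the whole series, which lets the fill values track slow climatic changes.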
At the analytical level, this paper aims to identify the security topics used in data journalism and the expression methods used in the statements of the Security Media Cell, as well as the means of clarification that data journalism uses about the Security Media Cell, and the methods the public prefers for presenting press releases, especially the strength of respondents' attitudes toward the data issued by the Security Media Cell. The field study included distributing a questionnaire to the public of Baghdad Governorate. The study reached several results, the most important of which is the Security Media Cell's interest in presenting its data in differ
Diabetes is one of the increasingly common chronic diseases, affecting millions of people around the world. Diabetes diagnosis, prediction, proper treatment, and management are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and the prediction of its consequences, such as hypo/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, KNN, and Random Forest. We carried out two experiments: the first used all 12 features of the dataset, where Random Forest outperformed the others with 98.8% accuracy. The second experiment used only five att
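Of the three classifiers mentioned, KNN is the simplest to sketch: a new patient record is classified by majority vote among its k nearest training records. The tiny two-feature dataset below is purely hypothetical (it is not the Iraqi patient data, and the feature choice of glucose and BMI is an assumption for illustration).

```python
import numpy as np

# Tiny hypothetical stand-in for diabetes records: two features per patient
# (e.g. glucose, BMI); labels 1 = diabetic, 0 = non-diabetic.
X_train = np.array([[150.0, 33.0], [165.0, 35.5], [90.0, 22.0], [95.0, 24.5]])
y_train = np.array([1, 1, 0, 0])

def knn_predict(x, X, y, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X - x, axis=1)   # Euclidean distance to each record
    nearest = np.argsort(dists)[:k]         # indices of the k closest records
    return int(np.round(y[nearest].mean())) # majority vote for 0/1 labels

print(knn_predict(np.array([155.0, 34.0]), X_train, y_train))  # → 1
```

In practice, experiments like those in the paper would use a library implementation (e.g. scikit-learn) with feature scaling and train/test splits; this sketch only shows the voting principle.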
Cloud computing (CC) is a fast-growing technology that offers computing, networking, and storage services that can be accessed and used over the internet. Cloud services save users money because they are pay-per-use, and they save time because they are on-demand and elastic, a unique aspect of cloud computing. However, several security issues must be addressed before users store data in the cloud. Because users have no direct control over data outsourced to the cloud, particularly personal and sensitive data (health, finance, military, etc.), and do not know where the data are stored, they must ensure that the cloud stores and maintains the outsourced data appropriately. The study's primary goals are to mak
A database is an arrangement of data that is organized and distributed in a way that allows the user to access the stored data in an easy and convenient manner. In the era of big data, however, traditional methods of data analytics may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the MapReduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
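The Hadoop specifics are not given in the abstract, but the MapReduce pattern itself (map to key-value pairs, shuffle by key, reduce per key) can be shown in-process. This toy sketch aggregates hypothetical EEG samples per channel; the channel names and values are invented for illustration and stand in for the distributed computation a real Hadoop job would perform.

```python
from itertools import groupby
from operator import itemgetter

# Toy stand-in for EEG records: (channel, sample_value) pairs.
records = [("C3", 0.8), ("C4", 1.2), ("C3", 1.0), ("C4", 0.6), ("C3", 1.2)]

# Map phase: emit (key, value) pairs, keyed by channel.
mapped = [(channel, value) for channel, value in records]

# Shuffle phase: group the pairs by key, as the framework does between map and reduce.
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=itemgetter(0))}

# Reduce phase: aggregate each channel's values (here, the mean amplitude).
means = {k: sum(vs) / len(vs) for k, vs in grouped.items()}
print(means)
```

On a cluster, the map and reduce steps run in parallel across nodes and the framework handles the shuffle, which is where the response-time reduction reported in the study comes from.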
In some cases, researchers need to know the causal effect of a treatment in order to decide whether to continue giving it or to stop it because it is of no use. The local weighted least squares method was used to estimate the parameters of the fuzzy regression discontinuity model, and the local polynomial method was used to estimate the bandwidth. Data were generated with sample sizes (75, 100, 125, 150) and 1000 replications. An experiment was conducted at the Innovation Institute for remedial lessons in 2021 on the 72 students participating in the institute, and data were collected. Those who received the treatment showed an increase in their score after
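The core estimation step can be illustrated with a simplified sharp regression discontinuity sketch: fit a weighted least squares line near the cutoff, with kernel weights that down-weight observations far from it. This is not the fuzzy model or the bandwidth selector of the study; the simulated data, the triangular kernel, the bandwidth h, and the common-slope specification are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.0                                    # cutoff of the running variable
x = rng.uniform(-1, 1, 200)                # running variable
treated = (x >= c).astype(float)           # sharp assignment at the cutoff
y = 1.0 + 0.5 * x + 2.0 * treated + rng.normal(0, 0.1, 200)  # true jump = 2

h = 0.5                                    # bandwidth (chosen by local polynomial methods in the study)
w = np.clip(1 - np.abs(x - c) / h, 0, 1)   # triangular kernel weights

# Locally weighted least squares: y ~ 1 + (x - c) + treated, weighted by w.
X = np.column_stack([np.ones_like(x), x - c, treated])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta[2])  # estimated treatment effect at the cutoff, close to the true jump of 2
```

The coefficient on the treatment indicator estimates the jump in the outcome at the cutoff, which is the causal effect of interest; the fuzzy design additionally rescales this jump by the jump in treatment take-up.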