The Internet of Things (IoT) contributes to improving the quality of life, as it supports many applications, especially healthcare systems. Data generated by IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. Because of the rapid growth in IoT devices, the volume of data sent to the CC has been increasing, so increasing congestion on the cloud network has become a problem in addition to latency. Fog Computing (FC), a middle layer located between the IoT devices and the CC layer, is used to address these problems because of its proximity to IoT devices, while filtering the data that is sent on to the CC. To handle the massive data generated by IoT devices on the FC layer, the Dynamic Weighted Round Robin (DWRR) algorithm was used: a load balancing (LB) algorithm that schedules and distributes data among fog servers by reading the CPU and memory values of those servers in order to improve system performance. The results showed that the DWRR algorithm provides high throughput, reaching 3290 req/sec at 919 users. Much research addresses workload distribution using LB techniques without paying much attention to Fault Tolerance (FT), which means that the system continues to operate even when a fault occurs. We therefore propose a replication FT technique, primary-backup replication based on a dynamic checkpoint interval, on FC. The checkpoint replicates new data from a primary fog server to a backup server dynamically by monitoring the CPU value of the primary server, so that a checkpoint occurs only when the CPU value is larger than 0.2, reducing overhead. The results showed that the execution time of the data filtering process on the FC with the dynamic checkpoint is less than with a static checkpoint that is independent of the CPU status.
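The abstract gives no implementation details, so the following is only a minimal Python sketch of the two ideas under stated assumptions: DWRR dispatch with per-round weights derived from CPU and memory readings, and a checkpoint that fires only when the primary's CPU reading exceeds 0.2. The class and function names (FogServer, dwrr_schedule, checkpoint_if_busy) and the weight formula are hypothetical, not from the paper.

from dataclasses import dataclass, field

@dataclass
class FogServer:
    name: str
    cpu: float = 0.0      # utilization in [0, 1], read from monitoring
    mem: float = 0.0      # utilization in [0, 1]
    state: dict = field(default_factory=dict)  # data to replicate

    def weight(self) -> int:
        # Hypothetical weight formula: lightly loaded servers get more slots.
        free = (1.0 - self.cpu) + (1.0 - self.mem)
        return max(1, round(5 * free))

def dwrr_schedule(servers, requests):
    """Dynamic Weighted Round Robin: rebuild the dispatch sequence each
    round from the servers' current CPU/memory readings."""
    while requests:
        # One round: each server receives as many requests as its weight.
        for server in [s for s in servers for _ in range(s.weight())]:
            if not requests:
                break
            server.state[len(server.state)] = requests.pop(0)

CPU_THRESHOLD = 0.2  # checkpoint only above this value, per the abstract

def checkpoint_if_busy(primary: FogServer, backup: FogServer):
    """Dynamic checkpoint: replicate the primary's state to the backup
    only when its CPU reading exceeds the threshold (CPU activity above
    0.2 is taken to indicate new data worth replicating)."""
    if primary.cpu > CPU_THRESHOLD:
        backup.state.update(primary.state)  # primary-backup replication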
The analysis of the hyperlink structure of the web has led to significant improvements in web information retrieval. This survey evaluates and analyzes relevant research publications on link analysis in web information retrieval, comparing them along several factors: the year of the research, the aims of the article, the algorithms used to carry out the study, and the results obtained from applying those algorithms. The findings reveal that PageRank, Weighted PageRank, and Weighted Page Content Rank are the algorithms most widely employed by researchers to analyze hyperlinks in web information retrieval. Finally, the paper summarizes and compares the reviewed studies.
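For reference, PageRank can be computed by power iteration; the sketch below is a generic textbook implementation with the usual damping factor of 0.85, not code from any of the surveyed papers. The Weighted PageRank variant differs only in that it distributes a page's rank in proportion to the in-/out-link weights of its targets rather than uniformly.

def pagerank(links, d=0.85, iterations=50):
    """Basic PageRank by power iteration.
    links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:          # dangling page: share rank with all
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += d * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Example: a tiny three-page web graph.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))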
Information pollution is regarded as a major problem facing journalists working in the editing section: journalistic materials are exposed to such pollution on their way through the editing pyramid. This research attempts to define the concept of journalistic information pollution and to identify its causes and sources. The research applied the descriptive method to achieve its objectives, using a questionnaire to collect data. The findings indicate that journalists are aware of the existence of information pollution in journalism and that this pollution has identifiable causes and sources.
The research aims to measure and evaluate the efficiency of the directorates of Anbar Municipalities using the Data Envelopment Analysis (DEA) method. The municipality sector is an important sector that is in direct contact with citizens' lives and provides them with essential services. The researcher used a case study method, with monthly reports as the data source. The research population is represented by the Directorate of Anbar Municipalities, and the research sample consists of seven municipalities that differ in category and size. The most important conclusion reached by the research is …
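The abstract does not show the DEA formulation used, so below is a minimal sketch of the standard input-oriented CCR envelopment model solved with scipy.optimize.linprog. The input/output matrices are illustrative placeholders, not the municipalities' actual data.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Variables z = [theta, lambda_1..lambda_n]; minimize theta subject to
      sum_j lambda_j * x_ij <= theta * x_io   (inputs)
      sum_j lambda_j * y_rj >= y_ro           (outputs), lambda >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # X @ lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Illustrative data: 2 inputs, 1 output, 7 hypothetical municipalities.
X = np.array([[5, 8, 7, 4, 9, 6, 5], [3, 6, 4, 2, 7, 5, 3]], float)
Y = np.array([[10, 12, 11, 9, 13, 10, 8]], float)
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")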
This research is concerned with the re-analysis of optical data (the imaginary part of the dielectric function as a function of photon energy E) of a-Si:H films prepared by Jackson et al. and Ferlauto et al. Using nonlinear regression fitting, we estimated the optical energy gap and the deviation from the Tauc model by treating the exponent p, which describes the photon-energy dependence of the momentum matrix element, as a free parameter, assuming the density-of-states distribution to be a square-root function. For the films prepared by Jackson et al., the fitted value of the parameter p over the studied photon energy range is close to the value assumed by the Cody model, and the optical gap energy is also close to the value …
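A minimal sketch of such a fit follows, assuming the generalized form eps2(E) = C (E − Eg)² / E^(2p), in which p = 1 recovers the Tauc form and p = 0 the Cody form; the paper's exact parameterization may differ, and the data here are synthetic stand-ins for the digitized spectra.

import numpy as np
from scipy.optimize import curve_fit

def eps2_model(E, C, Eg, p):
    """Generalized absorption model: eps2(E) = C * (E - Eg)^2 / E^(2p).
    Assumed parameterization: p = 1 ~ Tauc, p = 0 ~ Cody."""
    return C * np.clip(E - Eg, 0, None) ** 2 / E ** (2 * p)

# Synthetic data standing in for the measured eps2 spectra.
E = np.linspace(1.6, 3.0, 40)                 # photon energy (eV)
rng = np.random.default_rng(0)
eps2 = eps2_model(E, 30.0, 1.7, 0.1) + rng.normal(0, 0.05, E.size)

popt, pcov = curve_fit(eps2_model, E, eps2, p0=[10.0, 1.5, 0.5])
C_fit, Eg_fit, p_fit = popt
perr = np.sqrt(np.diag(pcov))
print(f"Eg = {Eg_fit:.3f} eV, p = {p_fit:.3f} (+/- {perr[2]:.3f})")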
Data security is an important component of data communication and transmission systems; its main role is to keep sensitive information safe and intact from the sender to the receiver. The proposed system aims to secure text messages through two security principles: encryption and steganography. The system introduces a novel encryption method based on graph theory properties: it forms a graph from a password, generates an encryption key as the weight matrix of that graph, and employs the Least Significant Bit (LSB) method to hide the encrypted message in the green component of a colored image. Practical experiments on perceptibility, capacity, and robustness were evaluated using similarity measures such as PSNR, MSE, and …
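The abstract describes the graph-based key only at a high level, so the sketch below pairs a standard green-channel LSB embedding with a hypothetical weight-matrix key built from consecutive password characters; the paper's actual graph construction is not specified and may differ.

import numpy as np
from PIL import Image

def password_weight_matrix(password: str, size: int = 26) -> np.ndarray:
    """Hypothetical key construction: treat consecutive characters of
    the password as weighted edges of a graph and use the resulting
    adjacency (weight) matrix as the encryption key."""
    W = np.zeros((size, size), dtype=np.uint8)
    for a, b in zip(password, password[1:]):
        W[ord(a) % size, ord(b) % size] += 1
    return W

def embed_green_lsb(image_path: str, bits: str, out_path: str):
    """Hide a bit string in the least significant bit of the green channel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    green = img[:, :, 1].flatten()
    assert len(bits) <= green.size, "message too long for cover image"
    for i, bit in enumerate(bits):
        green[i] = (green[i] & 0xFE) | int(bit)    # overwrite the LSB
    img[:, :, 1] = green.reshape(img[:, :, 1].shape)
    Image.fromarray(img).save(out_path, format="PNG")  # lossless output

# Example: embed an (already encrypted) message as a bit string.
cipher = b"secret"
bits = "".join(f"{byte:08b}" for byte in cipher)
# embed_green_lsb("cover.png", bits, "stego.png")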
Nearly everybody is connected to social media (Facebook, Twitter, LinkedIn, Instagram, etc.), which generate quantities of data that traditional applications are inadequate to process. Social media are regarded as an important platform for sharing the information, opinions, and knowledge of many subscribers. These media also raise many Big Data issues, such as data collection, storage, transfer, updating, reviewing, posting, scanning, visualization, and data protection. To deal with all these problems, there is a need for an adequate system that not only prepares the data but also provides meaningful analysis to take advantage of difficult situations relevant to business, proper decisions, health, social media, sc…
Breast cancer is the second deadliest disease infecting women worldwide. For this reason, early detection is one of the most essential steps to overcome it, relying on automatic tools such as artificial intelligence. Medical applications of machine learning algorithms are mostly based on their ability to handle classification problems, including the classification of illnesses or the estimation of prognosis. Before machine learning is applied for diagnosis, it must first be trained. The research methodology compares different machine learning algorithms, such as Random Tree, ID3, CART, SMO, C4.5, and Naive Bayes, to find the algorithm with the best training result. The contribution of this research is testing the data set with mis…
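A comparison of this kind can be sketched in a few lines; the version below uses the Wisconsin breast cancer dataset bundled with scikit-learn as a stand-in for the paper's data, and substitutes available equivalents where needed: scikit-learn's tree is CART-style (it has no stock ID3/C4.5 or Random Tree), and SVC's libsvm solver stands in for Weka's SMO.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "CART": DecisionTreeClassifier(random_state=0),  # CART-style tree
    "SMO (SVC)": SVC(kernel="linear"),               # SMO-trained SVM
    "Naive Bayes": GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name:12s} mean 10-fold accuracy = {scores.mean():.3f}")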
The purpose of this research is to enhance surface seismic data processing and interpretation by using the information produced by a vertical seismic profile (measured velocity and corridor stack). The Sindbad oil field (southern Iraq) was chosen for the study; it contains only one well with a VSP survey (Snd2), covering depths from the Zubair to the Sulaiy Formations, together with 2D seismic lines from the Basrah survey. Horizons were picked and used with the low-frequency content of the well data to construct a low-frequency model, which was combined with the high-frequency content of the VSP to build a high-frequency model that compensates for the limited main frequency of the seismic data during the inversion process. The seismic inversion technique is performed on post-s…
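As a generic illustration of the idea of compensating band-limited seismic data with a low-frequency model, the sketch below merges a well-derived impedance trend with band-limited inversion output using complementary Butterworth filters. This is a textbook construction under assumed sample rates and cutoff; it is not the authors' exact workflow, and all signals here are synthetic.

import numpy as np
from scipy.signal import butter, filtfilt

def merge_frequency_models(low_model, band_limited, fs, f_cut=8.0):
    """Combine the low-frequency trend from well/VSP data with the
    band-limited relative impedance from seismic inversion, using
    complementary low-/high-pass Butterworth filters at f_cut (Hz)."""
    b_lo, a_lo = butter(4, f_cut, btype="low", fs=fs)
    b_hi, a_hi = butter(4, f_cut, btype="high", fs=fs)
    trend = filtfilt(b_lo, a_lo, low_model)       # keep only the trend
    detail = filtfilt(b_hi, a_hi, band_limited)   # keep only the detail
    return trend + detail

# Illustrative traces sampled at 250 Hz (4 ms sampling).
fs = 250.0
t = np.arange(0, 2.0, 1 / fs)
low_model = 5000 + 800 * t                        # impedance trend from well
band_limited = 200 * np.sin(2 * np.pi * 30 * t)   # relative impedance
absolute = merge_frequency_models(low_model, band_limited, fs)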
The current paper proposes a new estimator for the linear regression model parameters under Big Data circumstances. The diversity of Big Data variables raises many challenges for researchers seeking new and novel methods to estimate the parameters of the linear regression model. The data were collected by the Central Statistical Organization of Iraq, with child labor in Iraq chosen as the application; child labor is a critical phenomenon from which both society and education suffer, and it affects the future of the next generation. Two methods have been selected to estimate the parameter…
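The abstract is cut off before naming the two estimation methods, so as a baseline only, the sketch below shows the standard ordinary least squares estimate against which such proposals are usually compared; the data are synthetic stand-ins for the child-labor observations.

import numpy as np

def ols(X, y):
    """Ordinary least squares baseline: beta = argmin ||y - X beta||^2,
    solved with a numerically stable least-squares routine."""
    X = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic regression data with known coefficients.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = 2.0 + X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=1000)
print(ols(X, y))   # approximately [2.0, 1.5, -0.7, 0.3]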