Digital tampering identification, which detects image modification, is a significant area of image analysis research. Over the last five years, the area has achieved exceptional precision through machine learning and deep learning-based strategies. Synthesis and reinforcement-based learning techniques must now evolve to keep pace with this research. However, before undertaking any experimentation, a researcher must first understand the current state of the art in the domain: the diverse paths, associated outcomes, and analyses lay the groundwork for successful experimentation and superior results. Universal image forensics approaches must therefore be thoroughly surveyed before experiments begin, which motivated this review of the various methodologies in the field. Unlike previous studies that focused on image splicing or copy-move detection, this study investigates the universal, type-independent strategies required to identify image tampering. The work analyses and evaluates several universal techniques based on resampling, compression, and inconsistency-based detection, and lists resources such as journals and datasets that are beneficial to the academic community. Finally, a future reinforcement learning model is proposed.
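The compression-based family of universal detectors mentioned above exploits inconsistencies left by repeated JPEG encoding. The sketch below is a minimal error level analysis (ELA) illustration of that idea, assuming Pillow is available; it is not one of the specific methods evaluated in the review, and the file names are placeholders.

```python
# Minimal error level analysis (ELA) sketch: recompress the image at a known
# JPEG quality and look at the pixel-wise difference. Edited regions often
# show a different error level than the rest of the image.
# Assumptions: Pillow is installed; "suspect.jpg" is a placeholder file name.
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress at a fixed quality
    recompressed = Image.open(buffer)
    return ImageChops.difference(original, recompressed)

ela_map = error_level_analysis("suspect.jpg")
ela_map.save("ela_map.png")
```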
The widespread use of the Internet of Things (IoT) in different aspects of an individual’s life, such as banking, wireless intelligent devices and smartphones, has led to new security and performance challenges under restricted resources. The Elliptic Curve Digital Signature Algorithm (ECDSA) is the most suitable choice for such environments due to its smaller encryption key size and adjustable security-related parameters. However, major performance metrics such as area, power, latency and throughput remain customisable and depend on the design requirements of the device.
The present paper puts forward an enhancement for the throughput performance metric by p
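For context on the operation that such hardware designs accelerate, the sketch below shows an ECDSA sign-and-verify round trip in software using the Python "cryptography" package; the curve choice (SECP256R1) and the message are illustrative assumptions, and this is not the hardware architecture put forward in the paper.

```python
# Software reference for the ECDSA sign/verify round trip that such hardware
# designs accelerate. Uses the Python "cryptography" package; the curve
# (SECP256R1) and message are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"sensor reading: 42"

# Sign with SHA-256 as the message digest
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the signature does not match
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```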
Cognitive radios have the potential to greatly improve spectral efficiency in wireless networks. Cognitive radios are considered lower-priority or secondary users of spectrum allocated to a primary user. Their fundamental requirement is to avoid interference with potential primary users in their vicinity. Spectrum sensing has been identified as a key enabling functionality to ensure that cognitive radios do not interfere with primary users, by reliably detecting primary user signals. In addition, reliable sensing creates spectrum opportunities for capacity increases in cognitive networks. One of the key challenges in spectrum sensing is the robust detection of primary signals in highly negative signal-to-noise ratio (SNR) regimes. In this paper,
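As a point of reference for the spectrum sensing problem described above, the following is a minimal energy-detection sketch, a common baseline rather than the detector proposed in this paper; the noise model, sample count and false-alarm rate are illustrative assumptions.

```python
# Energy-detection baseline for spectrum sensing: declare a primary user
# present when the received energy exceeds a noise-floor threshold.
# Assumptions: real-valued baseband samples, known noise variance, and a
# Gaussian approximation to the chi-square threshold for large sample counts.
import numpy as np
from scipy.stats import norm

def energy_detector(samples, noise_var, pfa=0.01):
    """Return True if a primary-user signal is declared present."""
    n = samples.size
    test_statistic = np.sum(samples ** 2) / noise_var  # ~ chi-square(n) under noise only
    threshold = n + np.sqrt(2 * n) * norm.ppf(1 - pfa)  # Gaussian approximation
    return test_statistic > threshold

rng = np.random.default_rng(0)
noise_only = rng.standard_normal(4096)
print(energy_detector(noise_only, noise_var=1.0))  # usually False at a 1% false-alarm rate
```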
teen sites in Baghdad are made. The sites are divided into two groups, one in Karkh and the other in Rusafa. Assessing the underground conditions is achieved by drilling vertical holes, called exploratory borings, into the ground, obtaining disturbed and undisturbed soil samples, and testing these samples in a laboratory (the civil engineering laboratory, University of Baghdad). On the disturbed samples, the tests involved grain size analysis and subsequent soil classification, Atterberg limits, and chemical tests (organic content, sulphate content, gypsum content and chloride content). On the undisturbed samples, the testing involved the consolidation test, from which the following parameters can be obtained: initial void ratio eo, compression index cc, swelling index
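For the consolidation parameters mentioned above, the compression index Cc is the slope of the virgin portion of the e-log σ′ curve. The snippet below is a small illustrative calculation with made-up values, not data from the Baghdad sites.

```python
# Compression index Cc from two points on the virgin (straight) portion of
# the e-log(sigma') consolidation curve. The void ratios and stresses below
# are made-up illustrative values, not results from the Baghdad sites.
import math

def compression_index(e1, e2, sigma1_kpa, sigma2_kpa):
    """Cc = (e1 - e2) / log10(sigma2 / sigma1), with sigma2 > sigma1."""
    return (e1 - e2) / math.log10(sigma2_kpa / sigma1_kpa)

print(compression_index(e1=0.92, e2=0.81, sigma1_kpa=100, sigma2_kpa=400))  # about 0.18
```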
Machine learning (ML) is a key component within the broader field of artificial intelligence (AI) that employs statistical methods to empower computers to learn and make decisions autonomously, without explicit programming. It is founded on the concept that computers can acquire knowledge from data, identify patterns, and draw conclusions with minimal human intervention. The main categories of ML are supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Supervised learning involves training models on labelled datasets and comprises two primary forms: classification and regression. Regression is used for continuous outputs, while classification is employed for discrete, categorical outputs.
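The classification/regression split described above can be illustrated with a minimal scikit-learn sketch; the toy datasets and models below are assumptions for demonstration only and are unrelated to any experiment in the text.

```python
# Minimal illustration of the supervised-learning split: a classifier for
# discrete labels and a regressor for a continuous target. The toy datasets
# and models are assumptions for demonstration only.
from sklearn.datasets import load_diabetes, load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Classification: predict a discrete class label
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predict a continuous value
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))
```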
Social networking has dominated the whole world by providing a platform for information dissemination. People usually share information without knowing its truthfulness. Nowadays, social networks are used to gain influence in many fields, such as elections and advertisements. It is not surprising that social media has become a weapon for manipulating sentiments by spreading disinformation. Propaganda is one of the systematic and deliberate attempts used to influence people for political or religious gains. In this research paper, efforts were made to classify propagandist text from non-propagandist text using supervised machine learning algorithms. Data were collected from news sources from July 2018 to August 2018. After annotation
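A supervised pipeline of the kind applied in that study might look like the hedged sketch below, which pairs TF-IDF features with a logistic regression classifier; the example sentences and labels are invented placeholders, not the annotated 2018 news corpus used by the authors.

```python
# Hedged sketch of a supervised text-classification pipeline (TF-IDF features
# plus logistic regression). The example sentences and labels are invented
# placeholders, not the annotated 2018 news corpus used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Our glorious leader alone can save the nation from its enemies",
    "The city council approved the annual budget on Tuesday",
]
labels = ["propaganda", "non-propaganda"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Only traitors question the party's heroic struggle"]))
```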
Background: This study evaluated the influence of incorporating different fiber formulations in resin composite on the cuspal deflection (CD) of endodontically treated teeth with mesio-occluso-distal (MOD) cavities. Materials and Methods: Thirty-two freshly extracted maxillary premolar teeth received MOD cavity preparation followed by endodontic treatment using the single-cone obturation technique, and were divided into: Group I: direct composite resin only, using a centripetal technique; Group II: direct composite resin with short fiber-reinforced composite (everX Flow); Group III: direct composite resin with leno wave ultra-high molecular weight polyethylene (LWUHMWPE) fibers placed on the cavity floor; and Group IV: direct composite resin with LWUHMWPE
A strong sign language recognition system can break down the barriers that separate hearing and speaking members of society from speechless members. A novel, fast recognition system with low computational cost for digital American Sign Language (ASL) is introduced in this research. Different image processing techniques are used to optimize and extract the shape of the hand fingers in each sign. The feature extraction stage includes determining an optimal threshold on a statistical basis, then recognizing the gap area in the zero sign and calculating the height of each finger in the other digits. The classification stage depends on the gap area in the zero sign and the number of opened fingers in the other signs, as well as
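As a rough illustration of the statistically chosen threshold and finger-region measurements described above, the sketch below uses Otsu's method and a bounding-box height in OpenCV; it is not the authors' exact pipeline, and 'sign.png' is a placeholder path.

```python
# Rough illustration of segmenting a hand with a statistically chosen (Otsu)
# threshold and taking a bounding-box height as a crude measurement.
# Not the authors' exact pipeline; "sign.png" is a placeholder image path.
import cv2

gray = cv2.imread("sign.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method selects the threshold that minimises intra-class variance
_, hand_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Assume the largest contour is the hand and measure its bounding box height
contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(hand)
print("hand bounding-box height:", h)
```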
Background: Beta thalassemia major (β-TM) is an inheritable condition with many complications, especially in children. Blood-borne viral infection has been proposed as a risk factor due to the recurrent blood transfusion regimen (hemotherapy), as with human parvovirus B19 (B19V). Objective: This study investigated B19V seroprevalence, DNA presence, viral load, and genotypes in β-TM patients and blood donors. Methods: This is a cross-sectional study incorporating 180 subjects, segregated into three distinct groups of 60 each, namely control, β-TM, and β-TM infected with hepatitis C virus (HCV). For B19V prevalence in the studied groups, the ELISA technique and real-time PCR were used. The genotyping was followed
After a temporary halt to forced displacement in different cities of Iraq, these methodical operations returned directly to the areas of political conflict on the ground, where they were translated into operations of violent forced displacement. These operations aimed at completing the forced displacement that occurred after the occupation in 2003 and took a publicly visible upward curve after these events. Some of them aimed at the liquidation of any demographic diversity, whether religious, sectarian or ethnic, in certain provinces, while others aimed at redistributing the population within the province itself to produce purely sectarian zones, as is the case in Diyala, Nineveh and Babylon. Baghdad has been the epicenter of sectarian violence and th
For several applications, it is very important to have an edge detection technique that matches human visual contour perception and is less sensitive to noise. The edge detection algorithm described in this paper is based on the results obtained by Maximum a Posteriori (MAP) and Maximum Entropy (ME) deblurring algorithms. The technique makes a trade-off between sharpening and smoothing the noisy image. One of the advantages of the described algorithm is that it is less sensitive to noise than the Marr and Geuen techniques, which are considered to be among the best edge detection algorithms in terms of matching human visual contour perception.
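For comparison with the Marr-style approach mentioned above, the sketch below implements a basic Marr-Hildreth detector (Gaussian smoothing, Laplacian, zero-crossing test); it is a baseline illustration under an assumed sigma, not the proposed MAP/ME-based method.

```python
# Baseline Marr-Hildreth style detector: Gaussian smoothing, Laplacian, and a
# zero-crossing test. This is an illustration of a comparison technique under
# an assumed sigma, not the proposed MAP/ME-based method.
import numpy as np
from scipy import ndimage

def marr_hildreth_edges(image, sigma=2.0):
    """Return a boolean edge map from zero crossings of the LoG response."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    # A zero crossing occurs where the LoG response changes sign between
    # horizontally or vertically adjacent pixels.
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    return edges

# Synthetic step edge: the detector should fire along the vertical boundary
img = np.zeros((64, 64))
img[:, 32:] = 255.0
print("edge pixels found:", int(marr_hildreth_edges(img).sum()))
```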