With the rapid growth of modern communication and network applications, transmitting and storing data in compact form have become pressing issues. An enormous number of images is stored and shared every day, especially on social media, yet the size of the data that can be sent remains the main restriction. Essentially all of these applications rely on the well-known Joint Photographic Experts Group (JPEG) standard, so universally accepted standard compression systems are urgently needed to support this revolution. This review is concerned with differential pulse code modulation (DPCM) and pixel-based techniques, which exploit the spatial domain to compress images efficiently while preserving quality. The new pixel-based method overcomes the constraints of predictive coding, producing fewer residues and higher compression ratios.
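As a rough illustration of the predictive-coding idea behind DPCM (a minimal sketch using a simple left-neighbor predictor, not the specific pixel-based method proposed in the reviewed work), the following Python snippet encodes a pixel row as residuals and reconstructs it losslessly:

```python
import numpy as np

def dpcm_encode(row: np.ndarray) -> np.ndarray:
    """Encode a 1-D pixel row as DPCM residuals (left-neighbor predictor)."""
    residuals = np.empty_like(row, dtype=np.int16)
    residuals[0] = row[0]                                   # first pixel sent as-is
    residuals[1:] = row[1:].astype(np.int16) - row[:-1].astype(np.int16)
    return residuals

def dpcm_decode(residuals: np.ndarray) -> np.ndarray:
    """Reconstruct the row by accumulating the residuals."""
    return np.cumsum(residuals).astype(np.uint8)

row = np.array([100, 102, 103, 101, 99], dtype=np.uint8)
res = dpcm_encode(row)           # small residuals entropy-code better than raw pixels
assert np.array_equal(dpcm_decode(res), row)
```

Because neighboring pixels are highly correlated, the residuals cluster near zero, which is what makes the subsequent entropy coding effective.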
Plagiarism is becoming an increasingly serious problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet and then copied and pasted. It is academic theft, since the perpetrator "takes" and presents the work of others as his or her own. Manual detection of plagiarism by a human is difficult, imprecise, and time-consuming, because it is hard for anyone to compare a piece of work against all existing material. Plagiarism is a major problem in higher education, and it can occur in any subject. Plagiarism detection has been studied in many scientific articles, and recognition methods have been developed using plagiarism analysis, authorship identification, and near-duplicate detection.
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the features that are extracted from the image using feature detection/extraction and feature description techniques. In computer vision, features denote informative data. The human eye readily extracts information from a raw image, but a computer cannot recognize image information directly; this is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
Cloud computing is an attractive technology that gives customers convenient, on-demand network access based on their needs, with minimal maintenance and minimal interaction with cloud providers. Security has emerged as a serious concern, particularly in cloud computing, where data is stored in, and accessed from, a third-party storage system over the Internet. It is critical to ensure that data is accessible only to the appropriate individuals and is not retained in third-party locations. Because third-party services frequently make backup copies of uploaded data for security reasons, deleting the data the owner submitted does not guarantee its removal from the cloud.
Recent reports of new pollution issues caused by the presence of medications in the aquatic environment have sparked great interest in studies that analyze and mitigate the associated environmental risks and the extent of this contamination. The main sources of pharmaceutical contaminants in natural lakes and rivers include clinic sewage, pharmaceutical production wastewater, and household sewage contaminated by drug users' excretions. In evaluating the health of rivers, pharmaceutical pollutants have been identified as an emerging class of pollutants. Previous studies showed that the most widely used pharmaceuticals found as contaminants are non-steroidal anti-inflammatory drugs and antibiotics.
Carbon-fiber-reinforced polymer (CFRP) is widely acknowledged as a leading advanced structural material, offering properties superior to those of traditional materials, and it has found diverse applications in industrial sectors such as automobiles, aircraft, and power plants. However, the production of CFRP composites is prone to fabrication problems, and structural defects also arise from cycling and aging processes. Identifying these defects at an early stage is crucial to prevent service issues that could result in catastrophic failures. Hence, routine inspection and maintenance are essential to prevent system collapse. To achieve this objective, conventional nondestructive testing (NDT) methods are used to inspect these structures.
Recent advancements in security approaches have significantly increased the ability to identify and mitigate threats and attacks in network infrastructures such as software-defined networks (SDNs) and to protect the internet security architecture against a variety of threats. Machine learning (ML) and deep learning (DL) are among the most popular techniques for preventing distributed denial-of-service (DDoS) attacks on any kind of network. The objective of this systematic review is to identify, evaluate, and discuss recent efforts on ML/DL-based DDoS attack detection strategies in SDN networks. To reach this objective, we conducted a systematic review in which we searched for publications that used ML/DL approaches.
In many applications, and especially in real-time applications, image processing and compression play an important part in modern life, for example in storage and in transmission over the internet. However, finding orthogonal matrices of different sizes to serve as filters or transforms is complex, and such matrices are important in applications such as image processing and communication systems. A new method has therefore been developed to construct orthogonal matrices as transform filters, which are then used to generate mixed transforms through a tensor-product-based technique for data processing. Our aim in this paper is to evaluate and analyze this new mixed technique for image compression together with the Discrete Wavelet Transform.
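As a rough sketch of how a tensor (Kronecker) product can combine two small orthogonal matrices into a larger mixed transform, the Python example below pairs a 2x2 Haar matrix with a 4x4 DCT matrix; these building blocks are illustrative assumptions, not necessarily the matrices used in the paper:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

# Two small orthogonal building blocks (illustrative choices).
haar2 = (1.0 / np.sqrt(2)) * np.array([[1.0,  1.0],
                                       [1.0, -1.0]])
dct4 = dct_matrix(4)

# The Kronecker product of orthogonal matrices is again orthogonal,
# yielding an 8x8 "mixed" transform that blends both basis families.
mixed = np.kron(haar2, dct4)
assert np.allclose(mixed @ mixed.T, np.eye(8))

# Apply as a separable 2-D transform to an 8x8 image block.
block = np.random.rand(8, 8)
coeffs = mixed @ block @ mixed.T          # forward transform
recon = mixed.T @ coeffs @ mixed          # inverse: transpose, by orthogonality
assert np.allclose(recon, block)
```

The key property exploited here is that orthogonality is preserved under the tensor product, so larger transform filters can be generated from small, well-understood ones without sacrificing perfect invertibility.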
In this paper, a new high-performance lossy compression technique based on the DCT is proposed. The image is partitioned into blocks of size NxN (where N is a multiple of 2), and each block is categorized as high frequency (uncorrelated) or low frequency (correlated) according to its spatial detail. This is done by computing the energy of the block as the absolute sum of differential pulse code modulation (DPCM) differences between pixels and comparing it with a specified threshold to determine the level of correlation. The image blocks are then scanned and converted into 1-D vectors using a horizontal scan order, and a 1-D DCT is applied to each vector to produce transform coefficients. The transformed coefficients are then quantized.
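A minimal sketch of the block-classification and 1-D DCT steps described above is given below; the block size, threshold value, and helper names are illustrative assumptions rather than the paper's exact parameters:

```python
import numpy as np
from scipy.fftpack import dct

def block_energy(block: np.ndarray) -> float:
    """Absolute sum of DPCM (left-neighbor) differences over the scanned block."""
    flat = block.astype(np.int32).ravel()           # horizontal scan order
    return float(np.abs(np.diff(flat)).sum())

def classify_and_transform(block: np.ndarray, threshold: float = 500.0):
    """Label a block as correlated/uncorrelated, then apply a 1-D DCT
    to its horizontally scanned vector (quantization would follow)."""
    label = "uncorrelated" if block_energy(block) > threshold else "correlated"
    vector = block.astype(np.float64).ravel()       # 1-D vector, horizontal scan
    coeffs = dct(vector, norm="ortho")              # 1-D DCT transform coefficients
    return label, coeffs

# Example: one 8x8 block taken from a smooth horizontal gradient.
block = np.tile(np.arange(8, dtype=np.uint8) * 4, (8, 1))
label, coeffs = classify_and_transform(block)
print(label, coeffs[:4])
```

Classifying blocks before transforming them allows the quantization step to treat smooth and detailed regions differently, which is where the compression gain over a uniform scheme comes from.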