Image compression is an important problem in computer storage and transmission: it makes efficient use of the redundancy embedded within an image itself, and it may additionally exploit the limitations of human vision to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed. The first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques; the second stage incorporates the first stage into a near-lossless compression scheme. The test results are promising for both stages, implicitly enhancing the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
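The model-plus-residual idea behind polynomial coding can be illustrated with a minimal sketch. This is not the paper's exact technique: the block size, the first-order polynomial surface, and the threshold value are all assumptions chosen for illustration; the principle shown is only that a fitted model captures the smooth component and a thresholded residual carries the rest.

```python
import numpy as np

def encode_block(block, threshold=4.0):
    """Fit z = a0 + a1*x + a2*y to one image block (illustrative model part);
    return the coefficients and the thresholded (lossy) residual."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    residual = block - (A @ coeffs).reshape(h, w)
    residual[np.abs(residual) < threshold] = 0.0  # drop imperceptible detail
    return coeffs, residual

def decode_block(coeffs, residual):
    """Rebuild the block as polynomial model + stored residual."""
    h, w = residual.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    return (A @ coeffs).reshape(h, w) + residual

# A perfectly planar block is captured by the model alone: residual -> 0.
block = np.arange(16, dtype=float).reshape(4, 4)
coeffs, residual = encode_block(block)
print(np.allclose(decode_block(coeffs, residual), block, atol=4.0))  # True
```

Because the residual is zeroed below the threshold, reconstruction error per pixel is bounded by that threshold, which is the sense in which such schemes are near-lossless.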
Scams remain among the top cybercrime incidents worldwide. Individuals with high susceptibility to persuasion are considered risk-takers and are prone to becoming scam victims. Unfortunately, little research has investigated the relationship between appeal techniques and individuals' personality, hindering the design of proper and effective campaigns that could help raise awareness of scams. In this study, the impact of fear and rational appeals was examined in order to identify a suitable approach for individuals with high susceptibility to persuasion. To evaluate the approach, pretest and posttest surveys with three separate controlled laboratory experiments were conducted. This study found that rational appeal treatm
This study used a continuous photo-Fenton-like method to remediate textile effluent containing azo dyes, particularly direct blue 15 dye (DB15). Eucalyptus leaf extract was used to create iron/copper nanoparticles supported on bentonite for use as a catalyst (E@B-Fe/Cu-NPs). Two fixed-bed configurations were studied and compared: the first mixed granular bentonite with E@B-Fe/Cu-NPs (GB-E@B-Fe/Cu-NPs), and the other mixed E@B-Fe/Cu-NPs with glass beads (glass beads-E@B-Fe/Cu-NPs); each mixture was packed into the fixed-bed column. Scanning electron microscopy (SEM), zeta potential, and atomic force microscopy (AFM) techniques were used to characterize the obtained nanoparticles (NPs). The effect of flow rate and DB15 concent
This research delves into the role of satirical television programs in shaping the image of Iraqi politicians. The research problem is summarized in the main question: how does satire featured in television programs influence the portrayal of Iraqi politicians? The research adopts a descriptive approach and employs a survey methodology. The primary data collection tool is a questionnaire, complemented by observation and measurement techniques. The study draws upon cultivation theory as its guiding theoretical framework. A total of 430 questionnaires were distributed to respondents who regularly watch satirical programs, selected through a multi-stage random sampling procedure.
The end of the twentieth century witnessed a technological evolution that brought convergence between the aesthetic value of the visual arts and the objective representation of the image in fabric design, opening new insights and unconventional potential for atypical employment of imagery. Modern fabric designs that employ film footage may incorporate several scenes from a film; this research therefore focuses on an analytical study of the elements of the film image, the rules of its organization, and how it functions in the design of fabrics. Thus, the problem was identified by asking the following: what are the elements of the film image, and how does the functioning of the struct
We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, dealing with the sensor's nonlinearity and its heteroskedastic, range-dependent measurement error. We solve the calibration problem without using additional hardware, instead exploiting assumptions about the environment surrounding the sensor during the calibration procedure. More specifically, we assume the sensor is calibrated by placing it in an environment such that its measurements lie in a 2D plane parallel to the ground. Its measurements then come from fixed objects that extend orthogonally w.r.t. the ground, so that they may be considered fixed points in an inertial reference frame. Moreov
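The combination of sensor nonlinearity and range-dependent noise can be handled with weighted least squares, which the following sketch illustrates. This is not the authors' procedure: the linear gain/offset sensor model and the noise law sigma(r) = 0.01 + 0.02*r are assumptions made purely for the simulation, and in practice the reference ranges would come from the fixed calibration targets rather than from ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground-truth ranges and a nonlinear, heteroskedastic sensor:
# reading = 0.95*r + 0.1 + noise, with noise std growing with range.
true_range = np.linspace(1.0, 10.0, 200)
sigma = 0.01 + 0.02 * true_range          # assumed range-dependent noise law
measured = 0.95 * true_range + 0.1 + rng.normal(0.0, sigma)

# Weighted least squares: fit true = c0 + c1*measured, down-weighting the
# noisier long-range samples by 1/sigma.
A = np.column_stack([np.ones_like(measured), measured])
w = 1.0 / sigma
coeffs, *_ = np.linalg.lstsq(A * w[:, None], true_range * w, rcond=None)

calibrated = A @ coeffs
rmse = np.sqrt(np.mean((calibrated - true_range) ** 2))
print(rmse < 0.25)  # residual error is at the noise floor, not the bias level
```

The fit recovers the inverse of the sensor map (slope near 1/0.95, offset correcting the 0.1 bias); the weighting matters because otherwise the high-variance far-range samples would dominate the fit.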