Compressing an image and reconstructing it without degrading its original quality remains an open challenge. In this work, a coding system that considers both quality and compression rate is implemented. The system applies a highly synthetic entropy coding scheme to store the compressed image at the smallest possible size without affecting its original quality. This coding scheme is applied with two transform-based techniques, one using the Discrete Cosine Transform and the other using the Discrete Wavelet Transform. The system was tested on several standard color images, and the results obtained with different evaluation metrics are presented, along with a comparison against previous related work.
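The abstract does not give the coding pipeline in detail; as a minimal sketch of the transform-plus-entropy-coding idea it describes, the following applies a blockwise DCT, uniform quantization, and zlib as a stand-in for the paper's entropy coder. The function names, block size, and quantization step are assumptions, not the paper's parameters.

```python
import zlib
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(gray, block=8, q=20):
    """Blockwise 2-D DCT + uniform quantization + zlib coding.

    Illustrative stand-in: the paper's actual entropy coder and
    quantization tables are not specified in the abstract.
    """
    h, w = gray.shape
    h8, w8 = h - h % block, w - w % block            # crop to full blocks
    img = gray[:h8, :w8].astype(np.float32)
    coeffs = np.empty_like(img, dtype=np.int16)
    for i in range(0, h8, block):
        for j in range(0, w8, block):
            c = dctn(img[i:i+block, j:j+block], norm="ortho")
            coeffs[i:i+block, j:j+block] = np.round(c / q)  # quantize
    return zlib.compress(coeffs.tobytes()), (h8, w8)

def block_dct_decompress(payload, shape, block=8, q=20):
    h8, w8 = shape
    coeffs = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    coeffs = coeffs.reshape(h8, w8).astype(np.float32) * q  # dequantize
    out = np.empty((h8, w8), dtype=np.float32)
    for i in range(0, h8, block):
        for j in range(0, w8, block):
            out[i:i+block, j:j+block] = idctn(
                coeffs[i:i+block, j:j+block], norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```

Raising `q` trades reconstruction quality for a smaller compressed payload, which is the quality-versus-rate balance the abstract refers to.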
Mobile services are among the most important telecommunication media, alongside the Internet and telephone networks, because of their high availability and their independence of physical location and time. Consequently, with the rapid and wide growth of mobile networks and their use for different types of information such as messages, images, and videos, the need has emerged to protect mobile information against alteration and misuse. The proposed system uses watermarking as a tool to protect images on a mobile device by registering them with a proposed watermarking server. This server allows the owner to protect his images by using invisible watermarks.
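The embedding algorithm itself is not described in the abstract; purely as an illustration of one common invisible-watermarking approach, the sketch below hides a bit string in the least significant bits of an image's blue channel. The LSB scheme and function names are assumptions, not the paper's server-side method.

```python
import numpy as np

def embed_lsb(img, bits):
    """Hide a bit string in the LSBs of the blue channel (channel 2).

    Illustrative only: the proposed server's scheme may differ.
    """
    out = img.copy()
    blue = out[..., 2].flatten()
    if len(bits) > blue.size:
        raise ValueError("watermark too long for this image")
    for k, b in enumerate(bits):
        blue[k] = (blue[k] & 0xFE) | int(b)        # overwrite the LSB
    out[..., 2] = blue.reshape(img.shape[:2])
    return out

def extract_lsb(img, n_bits):
    flat = img[..., 2].reshape(-1)
    return "".join(str(flat[k] & 1) for k in range(n_bits))

# Round trip on a random RGB image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
mark = "0100100001101001"                          # "Hi" in ASCII bits
assert extract_lsb(embed_lsb(image, mark), len(mark)) == mark
```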
Image fusion is used to gather important data from a set of input images and combine it into a single output image, making the result more meaningful and usable than any of the inputs alone. Image fusion boosts the quality and applicability of data, and the accuracy of the fused image depends on the application. It is widely used in smart robotics, audio-camera fusion, photonics, system control, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart assembly-line robots. This paper provides a literature review of image fusion techniques in the spatial domain and the frequency domain, such as averaging, min-max, block substitution, and Intensity-Hue-Saturation (IHS).
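Two of the spatial-domain rules named above, averaging and maximum selection, are simple enough to state directly; a minimal sketch, assuming two registered single-channel input images of the same shape:

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise average: a simple spatial-domain fusion rule."""
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)

def fuse_max(a, b):
    """Maximum-selection rule: keep the brighter pixel of the two inputs."""
    return np.maximum(a, b)
```

Averaging suppresses noise but can blur detail, while maximum selection preserves the most salient pixel from either source; this trade-off is why the fused result's accuracy depends on the application.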
Human skin detection, which is usually performed before image processing, is the method of discovering skin-colored pixels and regions that may belong to human faces or limbs in videos or photos. Many computer vision approaches have been developed for skin detection. A skin detector typically transforms a given pixel into a suitable color space and then uses a skin classifier to label the pixel as skin or non-skin. A skin classifier defines the decision boundary of the skin-color class in that color space based on skin-colored pixels. The purpose of this research is to build a skin detection system that distinguishes between skin and non-skin pixels in colored still pictures. This is performed by introducing a metric that measures…
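The paper's metric is truncated above; as a generic illustration of the transform-then-classify pipeline the abstract describes, the sketch below converts RGB pixels to the YCbCr color space and applies a widely used chrominance box rule. The thresholds are textbook values, not the paper's metric.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 conversion from 8-bit RGB to YCbCr."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(rgb):
    """Boolean mask of likely skin pixels via a chrominance box rule."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Working in chrominance (Cb, Cr) rather than raw RGB makes the decision boundary largely insensitive to brightness, which is why skin detectors transform the color space first.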
The present paper concerns the problem of estimating the reliability of a system in the stress-strength model, assuming that stress and strength are independent and non-identically distributed, each following a Lomax distribution. Various shrinkage estimation methods were employed in this context, based on maximum likelihood, the method of moments, and shrinkage weight factors, and evaluated through Monte Carlo simulation. Comparisons among the suggested estimation methods were made using the mean absolute percentage error criterion, implemented in MATLAB.
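The shrinkage estimators themselves are not given in the abstract; as a small illustration of the underlying quantity, the Python sketch below (the paper's own code is in MATLAB) estimates the stress-strength reliability R = P(Y < X) by Monte Carlo simulation for independent Lomax strength X and stress Y. The common scale parameter and shape values are hypothetical.

```python
import numpy as np

def lomax_sample(rng, alpha, lam, size):
    """Draw Lomax(alpha, lam) variates by inverse-transform sampling.

    CDF: F(x) = 1 - (1 + x/lam)**(-alpha), x >= 0.
    """
    u = rng.random(size)
    return lam * ((1 - u) ** (-1 / alpha) - 1)

def reliability_mc(alpha_x, alpha_y, lam, n=1_000_000, seed=0):
    """Monte Carlo estimate of R = P(Y < X) for independent Lomax
    strength X and stress Y sharing the same scale lam."""
    rng = np.random.default_rng(seed)
    x = lomax_sample(rng, alpha_x, lam, n)   # strength
    y = lomax_sample(rng, alpha_y, lam, n)   # stress
    return np.mean(y < x)

# With a common scale, the exact value is alpha_y / (alpha_x + alpha_y),
# so this should print approximately 0.6.
print(reliability_mc(2.0, 3.0, 1.0))
```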
The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study aims to find an optimal configuration for GNSS baselines, in terms of the number and distribution of baselines, that improves the quality criteria of GNSS networks. The first order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, using the sequential adjustment method to solve its objective functions.
FOD for optimum precision (FOD-p) was the proposed model, based on the A-optimality and E-optimality design criteria. These design criteria were selected as objective functions of precision, which…
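The sequential adjustment itself is not reproduced here; as a small illustration of the two criteria named above, the sketch below computes the A-optimality (trace) and E-optimality (maximum eigenvalue) scores of a network's parameter cofactor matrix Qxx = (AᵀPA)⁻¹ for an assumed design matrix A and weight matrix P.

```python
import numpy as np

def precision_criteria(A, P):
    """A- and E-optimality scores of a least-squares design.

    A : design matrix of the observation equations
    P : weight matrix of the observations
    Qxx = (A^T P A)^{-1} is the cofactor matrix of the parameters;
    a smaller trace (A-optimality) and a smaller maximum eigenvalue
    (E-optimality) both indicate a more precise network.
    """
    Qxx = np.linalg.inv(A.T @ P @ A)
    eig = np.linalg.eigvalsh(Qxx)              # Qxx is symmetric
    return {"A-opt (trace)": np.trace(Qxx),
            "E-opt (max eigenvalue)": eig.max()}

# Toy example: 4 observations of 2 parameters with equal weights.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
P = np.eye(4)
print(precision_criteria(A, P))
```

A FOD search would evaluate these scores for candidate baseline configurations (candidate rows of A) and keep the configuration that minimizes them.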
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and noisy, which makes finding topics in them difficult. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short text content like Twitter. Luckily, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned.
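The abstract does not specify how hashtags are exploited; one common approach, shown below purely as an assumption, is hashtag pooling: tweets that share a hashtag are concatenated into one pseudo-document before LDA is fitted, so the model sees longer documents. The sketch uses scikit-learn's CountVectorizer and LatentDirichletAllocation.

```python
import re
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def pool_by_hashtag(tweets):
    """Concatenate tweets sharing a hashtag into one pseudo-document."""
    pools = defaultdict(list)
    for t in tweets:
        for tag in re.findall(r"#(\w+)", t.lower()):
            pools[tag].append(t)
    return [" ".join(group) for group in pools.values()]

tweets = [
    "Great #python talk on data pipelines",
    "Learning #python for machine learning",
    "Match day! #football was intense",
    "Best #football goal this season",
]
docs = pool_by_hashtag(tweets)

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top three terms of each learned topic.
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", ", ".join(terms[comp.argsort()[-3:]]))
```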