In this paper, we present an enhanced image-compression process based on the RLE algorithm. The proposed enhancement reduces the size of the compressed image, whereas the original method was designed primarily for compressing binary images [1] and usually increases the size of the original image when applied to color images. The enhanced algorithm was tested on a sample of ten 24-bit true-color BMP images; an application built in Visual Basic 6.0 displays the image size before and after compression and computes the compression ratio for both the standard RLE and the enhanced RLE algorithm.
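The abstract above builds on the classic run-length encoding scheme. As a point of reference, here is a minimal sketch of plain RLE over a byte sequence; it illustrates why the technique suits binary images (long uniform runs) and degrades on color images (short runs inflate the output). This is generic RLE, not the authors' enhanced variant, and the function names are illustrative.

```python
def rle_encode(data: bytes) -> list:
    """Encode a byte sequence as a list of (count, value) runs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        # Extend the run while the same byte value repeats.
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs: list) -> bytes:
    """Expand (count, value) runs back into the original byte sequence."""
    return bytes(v for count, v in runs for v in [v] * count)

# A binary-image-like row compresses well: 10 bytes -> 3 runs.
sample = bytes([255] * 6 + [0] * 3 + [255])
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
```

On a color image, adjacent pixels rarely repeat exactly, so each pixel can become its own (1, value) run, which is larger than the raw data; this is the inflation the enhanced algorithm targets.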
This study aimed to provide new indications that may clarify the relationships between the total and standard lengths and the length, thickness, and weight of the otoliths, compared to the body weights of two invasive fish species in the Iraqi aquatic environment, the common carp
The demand for electronic-passport photo (frontal facial) images has grown rapidly. It now extends to Electronic Government (E-Gov) applications such as social benefits, driver's licenses, e-passports, and e-visas. With COVID-19 (coronavirus disease), formal facial images are becoming more widely used and spreading quickly as a means of verifying an individual's identity. Unfortunately, such images carry insignificant detail in their constant backgrounds, which leads to large byte consumption that affects storage space and transmission; the optimal solution aims to curtail data size using compression techniques that efficiently exploit image redundancy.
We observed strong nonlinear absorption in CdS nanoparticles with dimensions in the range 50-100 nm when irradiated with a femtosecond pulsed laser at 800 nm and an irradiance of 120 GW/cm². The repetition rate and average power were 250 kHz and
Support Vector Machines (SVMs) are supervised learning models used to examine datasets in order to classify or predict dependent variables. An SVM is typically used for classification by determining the best hyperplane between two classes. However, working with huge datasets can lead to a number of problems, including time-consuming and inefficient solutions. This research updates the SVM by employing a stochastic gradient descent method. The new approach, the extended stochastic gradient descent SVM (ESGD-SVM), was tested on two simulation datasets. The proposed method was compared with other classification approaches such as logistic regression, the naive model, K-Nearest Neighbors, and Random Forest. The results show that the ESGD-SVM has a
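To make the idea of training an SVM with stochastic gradient descent concrete, here is a minimal sketch of a linear SVM fit by SGD on the regularized hinge loss. This is the standard SGD-SVM formulation, not the authors' ESGD-SVM extension; the hyperparameters and toy data are illustrative assumptions.

```python
import random

def sgd_svm(X, y, lr=0.01, lam=0.01, epochs=100):
    """Train a linear SVM by stochastic gradient descent on the hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.
    Each step descends the gradient of lam/2*||w||^2 + max(0, 1 - y*(w.x + b)).
    """
    w = [0.0] * len(X[0])
    b = 0.0
    rng = random.Random(0)          # fixed seed for reproducibility
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)          # stochastic: one example at a time
        for i in order:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:          # hinge is active: regularize and pull toward the example
                w = [wj - lr * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:                   # hinge inactive: regularization only
                w = [wj - lr * lam * wj for wj in w]
    return w, b

# Linearly separable toy data with labels in {-1, +1}.
X = [[1.0, 2.0], [2.0, 3.0], [-1.0, -1.5], [-2.0, -1.0]]
y = [1, 1, -1, -1]
w, b = sgd_svm(X, y)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

Because each update touches only one example, the cost per step is independent of the dataset size, which is exactly the property that makes SGD attractive for the huge datasets mentioned above.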
In this research, an annealed nanostructured ZnO catalyst water-purification system was built using sunlight and lasers of different wavelengths as stimulating light sources to enhance the photocatalytic degradation of methylene blue (MB) dye as a model, based on interfacial charge transfer. The structure, crystallite size, morphology, particle size, optical properties, and degradation ability of the annealed nanostructured ZnO were characterized by X-Ray Diffraction (XRD), Atomic Force Microscopy (AFM), and UV-VIS spectrometry, respectively. XRD results demonstrated a pure crystalline hexagonal wurtzite phase with a crystallite size of 23 nm. From the AFM results, the average particle size was 79.25 nm. All MB samples and MB with annealed nanostr
A modification to the cascaded single-stage distributed amplifier (CSSDA) design using an active inductor is proposed. This modification renders the amplifier suitable for high-gain operation in a small on-chip area. Microwave Office simulation of the novel design approach shows that its performance is comparable with conventional distributed amplifiers but occupies a smaller area. The CSSDA is suitable for optical and satellite communication systems.
This paper proposes a new encryption method that combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation: our approach merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the remaining 15 keys. This complexity raises the level of the ciphering process; moreover, the operation shifts only one bit to the right. The second is the nature of the encryption process, which includes two keys and mixes one round of DES with one round of AES to reduce the running time. The W-method deals with
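The key-schedule step described above (merge two 64-bit halves into a 128-bit root key, then derive the remaining keys by shifting one bit to the right) can be sketched as follows. This is a hypothetical illustration of that description only, not the authors' W-method: the DES- and AES-derived inputs are stand-in integers, and the one-bit shift is interpreted here as a rotation so no key material is lost.

```python
def make_root_key(des_half: int, aes_half: int) -> int:
    """Concatenate a 64-bit DES-derived half and a 64-bit AES-derived half
    into a 128-bit root key (hypothetical reading of the described merge)."""
    return (des_half << 64) | aes_half

def rotate_right_1(key: int, width: int = 128) -> int:
    """Rotate a width-bit key right by one bit (assumed interpretation of
    'shifts the operation one bit only to the right')."""
    return ((key >> 1) | ((key & 1) << (width - 1))) & ((1 << width) - 1)

def derive_keys(root: int, n: int = 15) -> list:
    """Derive the n remaining round keys by repeated one-bit right rotation."""
    keys, k = [], root
    for _ in range(n):
        k = rotate_right_1(k)
        keys.append(k)
    return keys
```

Deriving round keys by rotation keeps every derived key at full 128-bit length while making each one a cheap, deterministic function of the root key.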