Solar cells were assembled with electrolytes containing the I−/I3− redox couple, employing polyacrylonitrile (PAN), ethylene carbonate (EC), and propylene carbonate (PC), together with the double iodide salts tetrabutylammonium iodide (TBAI) and lithium iodide (LiI) plus iodine (I2), chosen to enhance cell efficiency. The performance of the solar cells was examined by varying the weight ratio of the salts in the electrolyte. The cell with an electrolyte of 60 wt.% TBAI / 40 wt.% LiI (+I2) showed the highest efficiency, 5.189% under 1000 W/m2 light intensity, whereas the cell with 60 wt.% LiI / 40 wt.% TBAI (+I2) showed a lower efficiency of 3.189%. The conductivity increases with increasing TBAI weight ratio, reaching a maximum of 1.7×10−3 S cm−1 at room temperature for 60 wt.% TBAI, while the lowest ionic conductivity, 5.27×10−4 S cm−1, is obtained for the electrolyte with 40 wt.% TBAI. The results show that conductivity rises with rising temperature; this may be attributed to expansion of the polymer and the resulting increase in free volume. The variation of ionic conductivity with temperature obeys an Arrhenius-type thermally activated process, and the differences in activation energy strongly support the observed changes in electrical conductivity.
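The Arrhenius-type thermally activated behaviour referred to in this abstract is conventionally written as follows (a standard textbook relation, not an equation reproduced from the paper itself):

```latex
\sigma(T) = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right)
```

where \(\sigma_0\) is the pre-exponential factor, \(E_a\) the activation energy, and \(k_B\) the Boltzmann constant; plotting \(\ln\sigma\) against \(1/T\) then yields a straight line of slope \(-E_a/k_B\), which is how the activation energies compared in the abstract are typically extracted.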
Features describe the contents of an image and may be corners, blobs, or edges. Corners are among the most important features for describing an image, so there are many algorithms to detect them, such as Harris, FAST, and SUSAN. Harris is an efficient and accurate corner-detection method; it is rotation invariant but not scale invariant. This paper presents a Harris corner detector that is also invariant to scale, an improvement achieved by applying the Gaussian function at different scales. The experimental results illustrate that using Gaussian functions of varying scale is an effective way to address this weakness of Harris.
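As an illustration only, and not the paper's actual implementation, a multi-scale Harris response can be sketched with NumPy/SciPy: the classic Harris measure is evaluated at several Gaussian scales and the per-pixel maximum is kept. The scale set, the pairing of integration scale to derivative scale, and the σ² normalisation heuristic are all assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_i=2.0, k=0.04):
    # derivative-of-Gaussian image gradients at scale sigma_d
    Ix = gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # structure tensor, integrated with a Gaussian window at scale sigma_i
    Sxx = gaussian_filter(Ix * Ix, sigma_i)
    Syy = gaussian_filter(Iy * Iy, sigma_i)
    Sxy = gaussian_filter(Ix * Iy, sigma_i)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def multiscale_harris(img, sigmas=(1.0, 2.0, 4.0)):
    # evaluate Harris at several Gaussian scales and keep the strongest
    # response per pixel; the sigma**2 factor is a heuristic normalisation
    # so that responses from different scales are comparable
    responses = [(s ** 2) * harris_response(img, sigma_d=s, sigma_i=2 * s)
                 for s in sigmas]
    return np.max(np.stack(responses), axis=0)
```

On a synthetic white square the strongest response lands near one of the four physical corners, which is the behaviour a scale-augmented Harris detector should preserve.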
Background: Expectoration of blood originating in the lungs or bronchial tubes is a frightening symptom for patients and is often a manifestation of significant, possibly dangerous underlying disease. Tuberculosis was and remains one of the common causes, followed by bronchiectasis, bronchitis, and lung cancer. Objectives: The aim of this study is to determine the frequency of the causes of respiratory tract bleeding in 100 patients attending Al-Kindy Teaching Hospital. Type of the study: Prospective descriptive observational study. Methods: A group of one hundred consecutive adult patients with lower respiratory tract bleeding was studied. History, physical examination, and a group of selected investigations were performed.
Channel estimation and synchronization are considered the most challenging issues in Orthogonal Frequency Division Multiplexing (OFDM) systems. OFDM is highly sensitive to synchronization errors, which reduce subcarrier orthogonality and lead to significant performance degradation. Synchronization errors cause two problems: Symbol Time Offset (STO), which produces inter-symbol interference (ISI), and Carrier Frequency Offset (CFO), which results in inter-carrier interference (ICI). The aim of this research is to simulate comb-type pilot-based channel estimation for an OFDM system, showing the effect of the number of pilots on channel-estimation performance, and to propose a modified estimation method for STO with less numb
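The comb-type pilot estimation described here follows a standard pattern: least-squares estimates at the pilot subcarriers, then interpolation across the data subcarriers. The sketch below is a generic version of that pattern, not the paper's modified method; the function name, the all-ones pilot symbols, and the use of linear interpolation are assumptions.

```python
import numpy as np

def comb_pilot_ls_estimate(rx, tx_pilots, pilot_idx, n_sub):
    """LS channel estimate at comb pilots, linearly interpolated elsewhere."""
    # least-squares estimate at the pilot subcarriers: H = Y / X
    h_p = rx[pilot_idx] / tx_pilots
    # interpolate real and imaginary parts separately over all subcarriers
    k = np.arange(n_sub)
    h_re = np.interp(k, pilot_idx, h_p.real)
    h_im = np.interp(k, pilot_idx, h_p.imag)
    return h_re + 1j * h_im
```

With a slowly varying channel the interpolated estimate is exact at the pilot positions and close to the true response between them, which is why denser pilot combs (the "pilot numbers" studied in the abstract) improve estimation accuracy.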
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions containing important information the compression ratio is reduced to prevent loss of information, while in smooth regions with no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
Metaheuristics in the swarm intelligence (SI) class have proven efficient and have become popular methods for solving different optimization problems. Based on their usage of memory, metaheuristics can be classified into algorithms with memory and algorithms without memory (memory-less). The absence of memory in some metaheuristics leads to the loss of information gained in previous iterations; such metaheuristics tend to drift away from promising areas of the solution search space, which leads to non-optimal solutions. This paper aims to review memory usage and its effect on the performance of the main SI-based metaheuristics. An investigation has been performed on SI metaheuristics, memory usage and memory-less metaheuristics, memory char
In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques we obtain a legible detection of heart boundaries and valve movement using traditional edge-detection methods.
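The filter–morphology–contrast pipeline described above can be sketched as follows; this is a generic illustration under assumed choices (median filter, grey-level closing, linear contrast stretch, Sobel edges), since the abstract does not fix the specific operators:

```python
import numpy as np
from scipy import ndimage

def preprocess_and_edges(img):
    # 1. median filtering suppresses speckle-like noise while keeping edges
    den = ndimage.median_filter(img, size=3)
    # 2. grey-level morphological closing fills small dark gaps
    #    inside bright cardiac structures
    closed = ndimage.grey_closing(den, size=(3, 3))
    # 3. linear contrast stretch to the full [0, 1] range
    lo, hi = closed.min(), closed.max()
    stretched = (closed - lo) / (hi - lo + 1e-9)
    # 4. traditional edge detection: Sobel gradient magnitude
    gx = ndimage.sobel(stretched, axis=1)
    gy = ndimage.sobel(stretched, axis=0)
    return np.hypot(gx, gy)
```

On a clean synthetic cavity the gradient magnitude is strong on the boundary and near zero in the interior, which is the behaviour the pre-processing is meant to preserve in noisy echocardiograms.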
Producing pseudo-random numbers (PRN) with high performance is one of the important issues that attract many researchers today. This paper suggests pseudo-random number generator models that integrate a Hopfield Neural Network (HNN) with a fuzzy logic system to improve the randomness of the Hopfield pseudo-random generator; the fuzzy logic system is introduced to control the update of the HNN parameters. The proposed model is compared with three state-of-the-art baselines. The results analysis, using the National Institute of Standards and Technology (NIST) statistical test suite and the ENT test, shows that the proposed model is statistically significant in comparison to the baselines, which demonstrates the competency of the neuro-fuzzy model to produce
Semantic segmentation realization and understanding is a demanding task not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound architectures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with the semantic meaning of each object. It is a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point-cloud dataset as the input data representation, and many challenges arise when working with 3D data, such as the small number, resolution, and accuracy of three-dimensional datasets. Deep learning now is the po
In this research a proposed technique is used to enhance the performance of the frame-difference technique for extracting moving objects from a video file. One of the most significant factors in performance degradation is the presence of noise, which may cause incorrect identification of moving objects, so it was necessary to find a way to diminish this noise effect. Traditional average and median spatial filters can handle such situations, but here the focus is on the spectral domain, using Fourier and wavelet transforms to decrease the noise effect. Experiments and statistical features (entropy, standard deviation) proved that these transformations can overcome such problems in an elegant way.
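The Fourier half of this idea can be sketched as low-pass filtering each frame in the frequency domain before differencing, so sensor noise does not register as spurious motion. This is an illustrative sketch only; the cutoff fraction, the ideal (brick-wall) low-pass mask, and the threshold are assumptions, and the paper's wavelet variant is not reproduced.

```python
import numpy as np

def lowpass_fft(frame, keep_frac=0.15):
    """Keep only low spatial frequencies of a 2-D frame."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    ch, cw = h // 2, w // 2
    rh, rw = max(1, int(h * keep_frac)), max(1, int(w * keep_frac))
    mask = np.zeros((h, w))
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def moving_object_mask(prev, curr, thresh=0.3):
    # low-pass both frames before frame differencing, then threshold
    diff = np.abs(lowpass_fft(curr) - lowpass_fft(prev))
    return diff > thresh
```

On two noisy frames that differ only by a bright inserted patch, the thresholded difference flags the patch interior while the static background stays quiet.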
A Multiple System Biometric System Based on ECG Data