Ionospheric characteristics exhibit significant variations with the solar cycle, geomagnetic conditions, season, latitude, and even local time. This research focuses on the global distribution of electron temperature (Te) and ion temperature (Ti) during great and severe geomagnetic storms (GMS); their daily and seasonal variation over the years 2001-2013; and the variation of Te and Ti with plasma velocity and geographic latitude during GMS. Finally, the observed Te and Ti are compared with values predicted by the IRI model for the two kinds of storm selected. Te, Ti, and plasma velocity data at different latitudes were taken from the Defense Meteorological Satellite Program (DMSP) satellites at 850 km altitude for great and severe geomagnetic storms between 2001 and 2013, according to availability. It appears that 22 severe and great geomagnetic storm events occurred during 2001-2005 alone among the selected years, around the maximum of solar cycle 23. The data analysis shows that, in general, the electron temperature is greater than the ion temperature, but some disturbances occur during storm time: during the day, Te and Ti fluctuate, with Ti exceeding Te. Judging by the Dst index, Te and Ti do not depend on the strength of the geomagnetic storm. The plasma velocity shows the same variation profile as Te and Ti during storm time, and there is a linear relation between Te and Ti and the plasma velocity. The variation of electron and ion temperature with geographic latitude during severe and great storms shows that as latitude increases, the ion temperature increases, reaching a maximum value of approximately 80,000 K at the poles.
Comparing the Te and Ti values predicted by the IRI model during the great and severe storms with the observed values shows that the predicted values are much less than the observed ones, and that the variation over 24 hours is nonlinear. From this we conclude that the model must be corrected for Te and Ti for these two kinds of storm.
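As a minimal, hypothetical sketch of how the reported linear Te-velocity relation could be quantified (the arrays below are made-up stand-ins for the DMSP samples, not values from the study; the same fit applied to Ti would quantify the second relation):

import numpy as np

# Made-up stand-ins for DMSP samples during a storm interval:
# plasma velocity (m/s) and electron temperature (K).
v  = np.array([150.0, 300.0, 450.0, 620.0, 800.0])
Te = np.array([2400.0, 2650.0, 2980.0, 3300.0, 3550.0])

# Least-squares line Te = a*v + b, with the correlation coefficient
# as a check on how linear the relation really is.
a, b = np.polyfit(v, Te, 1)
r = np.corrcoef(v, Te)[0, 1]
print(f"Te ~ {a:.2f}*v + {b:.0f} K (r = {r:.3f})")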
In this paper, an algorithm for binary codebook design is used in a vector quantization technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. Vector quantization (VQ) is used to compress the bitmap output by the first method (AMBTC). The binary codebook is generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress the bitmaps of all these images. The bitmap of an image is chosen for compression with this codebook based on the criterion of the average bitmap replacement error (ABPRE). This approach is suitable for reducing bit rates.
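As a rough illustration of the two stages described above (the random codebook stands in for the paper's codebook-design algorithm, and the 4x4 block size and 64-entry codebook size are assumptions):

import numpy as np

rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(64, 16))   # 64 binary codewords of 16 bits

def ambtc_block(block):
    """AMBTC stage: threshold a 4x4 block at its mean, keep low/high means and bitmap."""
    m = block.mean()
    bitmap = (block >= m).astype(np.uint8)
    hi = block[bitmap == 1].mean() if bitmap.any() else m
    lo = block[bitmap == 0].mean() if not bitmap.all() else m
    return lo, hi, bitmap

def quantize_bitmap(bitmap):
    """VQ stage: replace a 4x4 bitmap by its nearest codeword (Hamming distance)."""
    flat = bitmap.ravel()
    dists = np.count_nonzero(codebook != flat, axis=1)
    return codebook[np.argmin(dists)].reshape(4, 4), dists.min()

block = rng.integers(0, 256, size=(4, 4))      # one illustrative image block
lo, hi, bm = ambtc_block(block)
q, err = quantize_bitmap(bm)
print("bitmap replacement error (bits):", err)

The per-block Hamming distance returned here is the quantity that the ABPRE criterion averages over all blocks of an image.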
Many of the key stream generators used in practice are LFSR-based in the sense that they produce the key stream according to a rule y = C(L(x)), where L(x) denotes an internal linear bit stream, produced by a small number of parallel linear feedback shift registers (LFSRs), and C denotes some nonlinear compression function. In this paper we combine the output sequences from the linear feedback shift registers with the sequences from a nonlinear key generator to obtain a final, very strong key sequence.
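A minimal sketch of the y = C(L(x)) structure, assuming three small illustrative registers; the Geffe function used as C here is a classic textbook combiner that is known to be weak against correlation attacks, shown only to make the structure concrete, not as the generator proposed in this paper:

def lfsr(state, taps):
    """Fibonacci LFSR: yields one output bit per step; taps are 0-based indices."""
    state = list(state)
    while True:
        out = state[-1]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
        yield out

# Three small registers produce the internal linear stream L(x).
r1 = lfsr([1, 0, 1, 1, 0], taps=[0, 2])            # illustrative seeds and taps
r2 = lfsr([1, 1, 0, 1, 0, 0, 1], taps=[0, 3])
r3 = lfsr([0, 1, 1, 0, 1, 0, 1, 1, 1], taps=[0, 4])

def geffe(a, b, c):
    """C(x1, x2, x3) = x1*x2 XOR (NOT x1)*x3."""
    return (a & b) ^ ((1 - a) & c)

keystream = [geffe(next(r1), next(r2), next(r3)) for _ in range(32)]
print("".join(map(str, keystream)))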
A study to find the optimum separator pressures of separation stations has been performed. Stage separation of oil and gas is accomplished with a series of separators operating at sequentially reduced pressures. Liquid is discharged from a higher-pressure separator into the lower-pressure separator. The set of working separator pressures that yields maximum recovery of liquid hydrocarbon from the well fluid is the optimum set of pressures, which is the target of this work.
A computer model is used to find the optimum separator pressures. The model employs the Peng-Robinson equation of state (Peng and Robinson, 1976) for volatile oil. The application of t…
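A minimal sketch of the optimization itself, with two assumptions made to keep it short: the Wilson correlation replaces the paper's Peng-Robinson flash, and the four-component feed is purely illustrative:

import numpy as np

# Hypothetical feed (mole fractions) and critical properties; illustrative only.
z     = np.array([0.55, 0.15, 0.15, 0.15])      # C1, C3, nC5, nC10
Tc    = np.array([190.6, 369.8, 469.7, 617.7])  # K
Pc    = np.array([45.99, 42.48, 33.70, 21.10])  # bar
omega = np.array([0.012, 0.152, 0.252, 0.490])

def flash(zf, P, T):
    """Rachford-Rice flash with Wilson K-values: vapor fraction V and liquid comp x."""
    K = (Pc / P) * np.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))
    f = lambda V: np.sum(zf * (K - 1.0) / (1.0 + V * (K - 1.0)))
    if f(0.0) <= 0.0:                 # subcooled liquid
        return 0.0, zf
    if f(1.0) >= 0.0:                 # superheated vapor
        return 1.0, zf
    lo, hi = 0.0, 1.0
    for _ in range(60):               # bisection on the Rachford-Rice equation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    V = 0.5 * (lo + hi)
    x = zf / (1.0 + V * (K - 1.0))
    return V, x / x.sum()

def stock_tank_liquid(P1, P2, T=330.0, Pst=1.013):
    """Moles of stock-tank liquid per mole of feed for a train P1 > P2 > stock tank."""
    moles, comp = 1.0, z
    for P in (P1, P2, Pst):
        V, comp = flash(comp, P, T)
        moles *= (1.0 - V)
    return moles

# Brute-force search over the two intermediate separator pressures.
best = max((stock_tank_liquid(P1, P2), P1, P2)
           for P1 in np.arange(10.0, 60.0, 2.0)
           for P2 in np.arange(2.0, 10.0, 0.5))
print(f"max liquid = {best[0]:.4f} mol/mol feed at P1 = {best[1]:.0f} bar, P2 = {best[2]:.1f} bar")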
Glaucoma is one of the most dangerous eye diseases. It occurs as a result of an imbalance in the drainage and flow of the retinal fluid. Consequently, intraocular pressure is generated, which is a significant risk factor for glaucoma. Intraocular pressure causes progressive damage to the optic nerve head, thus leading to vision loss in the advanced stages. Glaucoma does not give any signs of disease in the early stages, so it is called "the Silent Thief of Sight". Therefore, early diagnosis and treatment of retinal eye disease are extremely important to prevent vision loss. Many articles aim to analyze fundus retinal images and diagnose glaucoma. This review can be used as a guideline to help diagnose glaucoma. It presents 63 articles…
This paper presents a study of gravitational lens time delays for a general family of lensing potentials, including the popular singular isothermal elliptical potential (SIEP) and the singular isothermal elliptical density distribution (SIED), but allowing general angular structure. The first section introduces the selected observations of gravitationally lensed systems. Section two then shows that the time delays for the singular isothermal elliptical potential (SIEP) and singular isothermal elliptical density distributions (SIED) have a remarkably simple and elegant form, and that the result for Hubble constant estimation actually holds for a general family of potentials, by combining the analytic results with data for the time delays…
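For context, the simplest member of this family, the singular isothermal sphere, already exhibits the kind of simple form involved; the standard textbook expression (quoted here for orientation, not taken from this paper) for the delay between images at angular radii \theta_A and \theta_B is

\Delta t = \frac{1 + z_l}{2c}\,\frac{D_l D_s}{D_{ls}}\left(\theta_B^2 - \theta_A^2\right),

so that measured delays together with the image positions constrain the distance combination D_l D_s / D_{ls}, and hence the Hubble constant.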
In this paper, the survival function is estimated for patients with lung cancer using different parametric estimation methods, based on a sample of complete real data describing the survival times of lung cancer patients, from diagnosis or hospital admission, over a period of two years (from 2012 to the end of 2013). The estimation methods are compared using the mean squared error as the statistical indicator, leading to the conclusion that estimating the survival function for lung cancer using the pre-test single-stage shrinkage estimator method was best.
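As an illustrative baseline for one parametric route (the survival times below are made up, and the paper's pre-test single-stage shrinkage estimator refines this kind of ordinary maximum-likelihood fit):

import numpy as np

# Made-up (time, event) pairs in months; event = 1 means death observed,
# event = 0 means the observation was right-censored.
t     = np.array([3.0, 7.0, 9.0, 14.0, 16.0, 20.0, 22.0, 24.0])
event = np.array([1,   1,   0,   1,    1,    0,    1,    1])

# Exponential model S(t) = exp(-lam*t); MLE with right censoring is
# (number of deaths) / (total time at risk).
lam = event.sum() / t.sum()
S = lambda x: np.exp(-lam * x)

# Simple empirical survival at the death times (ignoring ties and censoring
# adjustments), used to form a mean-squared-error comparison.
deaths = np.sort(t[event == 1])
emp = 1.0 - np.arange(1, deaths.size + 1) / t.size
mse = np.mean((S(deaths) - emp) ** 2)
print(f"lam = {lam:.4f}/month, MSE = {mse:.5f}")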
This paper is concerned with introducing and studying the first new approximation operators using a mixed degree system and the second new approximation operators using a mixed degree system, which are the core concepts of this paper. In addition, the approximations of graphs obtained using the second lower and second upper operators are more accurate than those obtained using the first lower and first upper operators, since the first accuracy is less than the second accuracy. For this reason, we study the properties of the second lower and second upper operators in detail. Furthermore, we summarize the results for the properties of the second lower and second upper approximation operators when the graph G is arbitrary, serial 1, serial 2, reflexive, symmetric, transitive…
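The paper's operators are built from its mixed degree system and are specific to it; as a loose, generic illustration of the underlying pattern of lower and upper approximations of a vertex set via neighbourhoods:

# Adjacency as neighbourhood sets, and a vertex set X to approximate.
graph = {1: {2}, 2: {1, 3}, 3: {2}, 4: {4}}
X = {1, 2}

lower = {v for v, nbrs in graph.items() if nbrs <= X}   # N(v) contained in X
upper = {v for v, nbrs in graph.items() if nbrs & X}    # N(v) meets X
print("lower:", lower, "upper:", upper)                 # accuracy = |lower| / |upper|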
The objective of this research is to test the impact of internal corporate governance instruments on working capital management and the reflection of each on firm performance. For this purpose, four main hypotheses were formulated. The first points to a significant effect of both major shareholders' ownership and board of directors' size on net working capital, with a positive association. The second shows a significant effect of net working capital on economic value added, with an inverse relationship, while the third explores a significant effect of major shareholders' ownership…
We introduce and discuss a recent type of fibrewise topological spaces, namely fibrewise bitopological spaces. We also introduce the concepts of fibrewise closed bitopological spaces, fibrewise open bitopological spaces, fibrewise locally sliceable bitopological spaces, and fibrewise locally sectionable bitopological spaces. Furthermore, we state and prove several propositions concerning these concepts.
In this paper, we use four classification methods to classify objects and compare among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MS COCO dataset for object classification and detection; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, enhanced using histogram equalization, and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied…
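A compact sketch of this pipeline on stand-in data (random 20 x 20 "images" replace the preprocessed MS COCO crops, so the grayscale conversion and histogram-equalization steps are omitted; the 7:3 split, PCA step, and four classifiers follow the text, while the PCA dimension and classifier settings are assumptions):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 20 * 20))        # flattened 20x20 stand-in images
y = rng.integers(0, 3, size=600)      # three dummy object classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=40).fit(X_tr)  # feature extraction, dimension assumed
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SGD": SGDClassifier(max_iter=1000, random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")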