Spider veins are a common aesthetic problem, mainly in females. The conventional treatment is microsclerotherapy (injections), but laser therapy has become an increasingly efficacious and convenient treatment method. The present study investigated the effectiveness and safety of a pulsed diode laser (810 nm) delivered by thermal photocoagulation. Ten patients with lower-limb spider veins were included in this prospective study. They were treated with a repetitive pulsed diode laser in a non-contact technique using the following laser parameters: wavelength 810 nm, power 1 W, pulse duration 0.1 s, pulse interval 0.5 s, spot diameter 4 mm, power density 7.9 W/cm². Laser therapy was performed on day zero and day fourteen. Clinical assessments were carried out before laser therapy, immediately after the first laser session, and after 2, 4, and 6 weeks. The procedure was performed without any type of anesthesia. Results showed a remarkable improvement for all patients after both the first and the second treatments. Six of the ten patients showed complete disappearance of the spider veins, with no perioperative or postoperative pain or complications and a shorter operative time than microsclerotherapy. Repetitive pulsed diode laser therapy (810 nm) is therefore an effective and safe treatment option for lower-limb spider veins. It is recommended that a larger number of cases be treated to allow proper statistical analysis, with a longer follow-up period to assess the recurrence rate.
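As a quick consistency check on the stated parameters (assuming a circular spot of diameter 4 mm), the reported power density follows directly from the power and the spot area:

\[
\text{power density} = \frac{P}{\pi r^{2}} = \frac{1\ \mathrm{W}}{\pi\,(0.2\ \mathrm{cm})^{2}} \approx 7.96\ \mathrm{W/cm^{2}} \approx 7.9\ \mathrm{W/cm^{2}}.
\]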
In this study, a fast block-matching search algorithm based on block descriptors and multilevel block filtering is introduced. The descriptors used are the mean and a set of centralized low-order moments. Hierarchical filtering and the MAE similarity measure are adopted to nominate the most similar blocks lying within the pool of neighboring blocks. After nomination, the similarity of the mean and moments is used to classify the nominated blocks into one of three sub-pools, each representing a nomination priority level (i.e., most, less, and least likely). The main reason for introducing the nomination and classification steps is a significant reduction in the number of matching instances for the pixels belonging to the c…
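A minimal sketch of the descriptor-based filtering idea, assuming grayscale blocks as NumPy arrays; the tolerance, moment orders, and function names are illustrative, and a single filtering level stands in for the paper's multilevel scheme:

```python
import numpy as np

def block_descriptors(block, max_order=3):
    """Mean plus centralized low-order moments of a block (orders 2..max_order)."""
    mean = block.mean()
    centered = block - mean
    moments = [np.mean(centered ** k) for k in range(2, max_order + 1)]
    return np.array([mean, *moments])

def mae(block_a, block_b):
    """Mean absolute error between two equally sized blocks."""
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def nominate(target, candidates, desc_tol=5.0):
    """Filter candidates whose descriptors are close to the target's, then
    rank the survivors by MAE (one tolerance for all descriptors, for simplicity)."""
    t_desc = block_descriptors(target)
    survivors = [c for c in candidates
                 if np.all(np.abs(block_descriptors(c) - t_desc) < desc_tol)]
    return sorted(survivors, key=lambda c: mae(target, c))
```

Only the blocks surviving the cheap descriptor comparison pay the cost of a full pixel-level MAE match, which is where the reduction in matching instances comes from.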
With the growth of the Internet and the increasing prevalence of transmission channels, it is necessary to strengthen security and develop new algorithms to maintain the security and integrity of data. One such substitution scheme is the Playfair cipher. The traditional Playfair scheme uses a small 5×5 matrix containing only uppercase letters, making it vulnerable to hackers and cryptanalysis. In this study, a new encryption and decryption approach is proposed to enhance the resistance of the Playfair cipher, developing a symmetric cryptosystem based on shared secrets. The proposed Playfair method uses a 5×5 keyword matrix for English and a 6×6 keyword matrix for Arabic to encrypt the alphabets of …
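For reference, a minimal sketch of the classical English (5×5) Playfair variant that the proposal extends; the paper's 6×6 Arabic matrix and its enhancements are not reproduced here, and the doubled-X edge case is ignored:

```python
def playfair_matrix(keyword):
    """Build the classic 5x5 Playfair matrix (I and J share one cell)."""
    seen = []
    for ch in keyword.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[i:i + 5] for i in range(0, 25, 5)]

def locate(matrix, ch):
    """Row and column of a letter in the matrix."""
    for r, row in enumerate(matrix):
        if ch in row:
            return r, row.index(ch)

def make_digraphs(text):
    """Split into letter pairs, inserting X between doubles and as padding."""
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        if i + 1 < len(text) and text[i + 1] != a:
            pairs.append((a, text[i + 1])); i += 2
        else:
            pairs.append((a, "X")); i += 1
    return pairs

def playfair_encrypt(plaintext, keyword):
    m = playfair_matrix(keyword)
    text = "".join(c for c in plaintext.upper().replace("J", "I") if c.isalpha())
    out = []
    for a, b in make_digraphs(text):
        (ra, ca), (rb, cb) = locate(m, a), locate(m, b)
        if ra == rb:      # same row: take the letters to the right
            out.append(m[ra][(ca + 1) % 5] + m[rb][(cb + 1) % 5])
        elif ca == cb:    # same column: take the letters below
            out.append(m[(ra + 1) % 5][ca] + m[(rb + 1) % 5][cb])
        else:             # rectangle rule: swap the columns
            out.append(m[ra][cb] + m[rb][ca])
    return "".join(out)

# e.g. playfair_encrypt("HIDE THE GOLD", "MONARCHY")
```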
Several oil reservoirs have suffered a sudden or gradual decline in production due to asphaltene precipitation inside the reservoir. Asphaltene deposition damages permeability and skin factor, alters reservoir wettability, and increases drawdown pressure; these adverse changes reduce the flow rate, so the economic profit drops. The aim of this study is to use local solvents (reformate, heavy naphtha, and a binary mixture of the two) to dissolve precipitated asphaltene inside the oil reservoir. Three sand-pack samples were prepared and mixed with a certain amount of asphaltene. The permeability of these samples was calculated before and after mixing with asphaltene. Then, the …
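The before-and-after permeability comparison mentioned here would typically rely on Darcy's law for linear flow through a sand pack (a standard relation, assumed here rather than stated in the abstract):

\[
k = \frac{q\,\mu\,L}{A\,\Delta P},
\]

where $q$ is the flow rate, $\mu$ the fluid viscosity, $L$ the sample length, $A$ the cross-sectional area, and $\Delta P$ the pressure drop across the sample.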
In this paper, we designed a new efficient stream-cipher cryptosystem that depends on a chaotic map to encrypt (decrypt) different types of digital images. The designed encryption system passed all the basic efficiency criteria (randomness, MSE, PSNR, histogram analysis, and key space) applied to the key extracted from the random generator as well as to the digital images after completing the encryption process.
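A minimal sketch of a chaotic-map stream cipher of this kind, using the logistic map as a stand-in generator (the abstract does not specify which map); the initial value and control parameter play the role of the secret key, and decryption is the same XOR applied again:

```python
import numpy as np

def logistic_keystream(length, x0=0.614, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x).
    (x0, r) act as the secret key; r near 4 keeps the map chaotic."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def chaotic_stream_cipher(image, x0=0.614, r=3.99):
    """Encrypt (or, applied twice, decrypt) a uint8 image by XOR with the keystream."""
    flat = image.reshape(-1)
    ks = logistic_keystream(flat.size, x0, r)
    return (flat ^ ks).reshape(image.shape)
```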
In this paper, the sky radio emission background level associated with radio storm bursts from the Sun and Jupiter is determined at a frequency of 20.1 MHz. The Sun and Jupiter radio storm observation data are taken from the NASA Radio JOVE telescope website, and the sunspot-number data from the National Geophysical Data Center (NGDC). Two Radio JOVE stations, (Sula, MT) and (Lamy, NM), were chosen from the data website for these extensive observations. For the Sun, twelve figures are used to determine the relation between the radio background emission and the daily sunspot number. For Jupiter, twenty-four figures are used to determine the relation between the radio background emission and the diffraction betwe…
Despite the expanding use of dummy variables as explanatory variables, their use as dependent variables is still limited, perhaps because of the problems that arise when dummy variables serve as dependent variables. This study aimed to use qualitative response models to measure the efficiency of cow farms using a random sample of 19 farms from the Abi Gherak district. The study estimates a transcendental logarithmic (translog) production function using stochastic frontier analysis (SFA) to interpret the relation between the return achieved by the cow farms, as the dependent variable, and labor and capital as independent variables. The function indicates that increasing labor by 100% will …
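For reference, a two-input translog production function with an SFA error structure typically takes the standard form below (written for labor $L$ and capital $K$; the coefficient names are generic, not the paper's):

\[
\ln y = \beta_0 + \beta_L \ln L + \beta_K \ln K + \tfrac{1}{2}\beta_{LL}(\ln L)^2 + \tfrac{1}{2}\beta_{KK}(\ln K)^2 + \beta_{LK}\,\ln L\,\ln K + v - u,
\]

where $v$ is symmetric random noise and $u \ge 0$ is the one-sided inefficiency term of the stochastic frontier.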
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared with the original signals. The compression ratio is calculated from the size of the …
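A minimal sketch of this pipeline, assuming PyWavelets is available; the decomposition level and LP order are illustrative choices, and the stored "previous samples" are taken here to be the first lp_order approximation samples needed to restart prediction:

```python
import numpy as np
import pywt  # PyWavelets

def levinson_durbin(x, order):
    """Levinson-Durbin recursion: returns the LP coefficients (leading 1),
    the reflection coefficients, and the final prediction error."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation r[0..]
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k[m - 1] = -acc / err
        a[1:m + 1] += k[m - 1] * a[m - 1::-1][:m]      # order-update of a
        err *= 1.0 - k[m - 1] ** 2
    return a, k, err

def compress_frame(frame, wavelet="db4", level=3, lp_order=10):
    """Keep only the wavelet approximation coefficients (details discarded),
    then model them with LPC; returns what would be stored per frame."""
    approx = pywt.wavedec(frame, wavelet, level=level)[0]
    a, k, err = levinson_durbin(approx - approx.mean(), lp_order)
    return a, approx[:lp_order]                        # LP coeffs + initial samples
```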