BACKGROUND: Humeral shaft fractures have a good rate of union; despite this, there is still a significant rate of nonunion after nonoperative treatment, and more often after operative treatment. AIM: The aim of this study is to evaluate the radiological and functional outcome of autogenous onlay grafting with compression plating for the treatment of persistent humeral shaft nonunion after failed previous surgery. MATERIALS AND METHODS: A prospective study of twenty patients aged between 20 and 60 years with persistent aseptic nonunion after failed surgical treatment of humeral shaft fractures at the Al-Zahra and Al-Kindy teaching hospitals. Patients with infected nonunion, diabetes mellitus, secondary metastasis, smoking, alcoholism, or long-term corticosteroid medication were excluded. All patients were treated with corticocancellous onlay bone grafting harvested from the ipsilateral upper tibia and compression plating (graft parallel to the plate) and followed up for at least 18 months postoperatively; radiological and functional outcomes were evaluated using the Mayo elbow performance index. RESULTS: All patients achieved solid union without hardware failure, and no patient needed further surgery; even with significant resorption of the graft, there is a good chance of graft recalcification and solid union with a good to excellent functional outcome. CONCLUSION: Very successful solid union was achieved in patients with established aseptic nonunion and pseudoarthrosis of the humerus.
Great scientific progress has led to the accumulation of vast amounts of information in large databases, so it is important to revise and organize these data in order to extract hidden information, or to classify the data according to the relations among them, so that they can be exploited for technical purposes.
Data mining (DM) is well suited to this area. This research studies the K-Means algorithm for clustering data in a practical application, where the effect on the results can be observed by changing the sample size (n) and the number of clusters (K).
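Since the abstract names K-Means with the sample size n and the cluster count K as the quantities being varied, a minimal, generic K-Means sketch is given below for illustration. It uses only NumPy; the random data, tolerance, iteration limit, and seed are illustrative assumptions, not the study's settings.

```python
import numpy as np

def k_means(data, k, max_iter=100, tol=1e-6, seed=0):
    """Plain K-Means: returns (centroids, labels) for an (n, d) data array."""
    rng = np.random.default_rng(seed)
    # Initialise centroids by picking k distinct samples at random.
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign every sample to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned samples.
        new_centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centroids - centroids) < tol:  # converged
            break
        centroids = new_centroids
    return centroids, labels

# Example: vary the sample size n and the number of clusters K, as in the study.
n, k = 500, 3                                   # illustrative values only
data = np.random.default_rng(1).normal(size=(n, 2))
centroids, labels = k_means(data, k)
```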
In this paper, an exact stiffness matrix and fixed-end load vector for nonprismatic beams with parabolically varying depth are derived. The principle of strain energy is used in the derivation of the stiffness matrix. The effects of both shear deformation and the coupling between axial force and bending moment are considered in the derivation. The fixed-end load vector for elements under uniformly distributed or concentrated loads is also derived. The correctness of the derived matrices is verified by numerical examples. It is found that the coupling effect between axial force and bending moment is significant for elements having axial end restraint. It was also found that the decrease in bending moment was in the …
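As a generic illustration of the strain-energy route to such a stiffness matrix (not the paper's exact expressions), the flexibility coefficients of the element can be written from the complementary strain energy, including the shear and axial terms the abstract mentions; the parabolic depth law h(x) and rectangular section are assumed forms, and the full element stiffness then follows from equilibrium.

```latex
% Complementary strain energy of the element (bending + shear + axial terms):
U = \int_0^L \frac{M(x)^2}{2\,E I(x)}\,dx
  + \int_0^L \frac{V(x)^2}{2\,G A_s(x)}\,dx
  + \int_0^L \frac{N(x)^2}{2\,E A(x)}\,dx ,
\qquad
h(x) = h_0 + (h_1 - h_0)\!\left(\frac{x}{L}\right)^{2}
\quad \text{(assumed parabolic depth).}

% Flexibility coefficients by Castigliano's second theorem, then the
% stiffness of the free-end DOFs by inversion (rectangular section assumed):
f_{ij} = \frac{\partial^2 U}{\partial F_i\,\partial F_j},
\qquad
\mathbf{K} = \mathbf{F}^{-1},
\qquad
I(x) = \frac{b\,h(x)^3}{12}, \quad A(x) = b\,h(x).
```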
Information security is a crucial factor when communicating sensitive information between two parties, and steganography is one of the techniques most often used for this purpose. This paper aims to enhance the capacity and robustness of information hiding by compressing the image data to a small size while maintaining high quality, so that the secret information remains invisible and only the sender and recipient can recognize the transmission. Three techniques are employed to conceal color and gray images: the Wavelet Color Process Technique (WCPT), the Wavelet Gray Process Technique (WGPT), and the Hybrid Gray Process Technique (HGPT). A comparison between the first and second techniques according to quality metrics, Root-Mean-Square Error (RMSE), Compression-…
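The abstract compares the techniques using quality metrics such as RMSE. A minimal sketch of how RMSE (and the closely related PSNR) can be computed between an original and a processed/stego image is shown below; the metric definitions are standard, but the random images and the toy perturbation are illustrative assumptions, not the paper's WCPT/WGPT/HGPT code.

```python
import numpy as np

def rmse(original, processed):
    """Root-Mean-Square Error between two images of identical shape."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (peak assumes 8-bit images)."""
    e = rmse(original, processed)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)

# Illustrative use with random 8-bit images standing in for a cover/stego pair.
cover = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
stego = cover.copy()
stego[::8, ::8, 0] ^= 1            # toy perturbation, e.g. LSB embedding
print(rmse(cover, stego), psnr(cover, stego))
```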
In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information the compression ratio is reduced to prevent loss of information, while in smooth regions with no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
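A toy sketch of the block-wise idea follows. Block variance is used here as the "importance" measure and DCT-coefficient truncation as the per-block compression ratio; both choices, and all thresholds, are assumptions made for illustration and are not the paper's criterion.

```python
import numpy as np
from scipy.fft import dctn, idctn   # 2-D DCT used as the per-block codec

def compress_adaptive(img, block=8, var_thresh=100.0, keep_busy=0.5, keep_smooth=0.1):
    """Block-wise compression: keep more DCT coefficients in 'important' blocks."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk = img[y:y + block, x:x + block].astype(np.float64)
            # Importance measure: block variance (assumed criterion).
            keep = keep_busy if blk.var() > var_thresh else keep_smooth
            coeffs = dctn(blk, norm='ortho')
            # Zero out all but the top-left 'keep' fraction of coefficients.
            k = max(1, int(block * keep))
            mask = np.zeros_like(coeffs)
            mask[:k, :k] = 1.0
            out[y:y + block, x:x + block] = idctn(coeffs * mask, norm='ortho')
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on one channel of a random image (stand-in for a real color plane).
gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
reconstructed = compress_adaptive(gray)
```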
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. The compression is based on selecting a small number of approximation coefficients produced by the wavelet decomposition (Haar and db4) at a suitably chosen level, while the detail coefficients are ignored; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients, and the prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared with the original signals. The compression ratio is calculated from the size of the …
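A minimal sketch of the two stages named in the abstract, wavelet approximation followed by Levinson-Durbin LP analysis, is shown below. PyWavelets (pywt) is assumed to be available for the db4 decomposition, and the decomposition level, LP order, and toy sinusoidal signal are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the Haar/db4 decomposition

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: LP coefficients, reflection coeffs, error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k[i - 1] = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k[i - 1] * a[i - j]
        new_a[i] = k[i - 1]
        a = new_a
        err *= (1.0 - k[i - 1] ** 2)
    return a, k, err

# Wavelet decomposition: keep the approximation, discard the details.
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * np.hanning(fs)   # toy stand-in signal
approx = pywt.wavedec(speech, 'db4', level=3)[0]         # approximation only

# LP analysis of the (rectangular-windowed) approximation coefficients.
order = 10                                               # assumed LP order
r = np.correlate(approx, approx, mode='full')[len(approx) - 1:]
lp, refl, pred_err = levinson_durbin(r, order)
```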
In this paper, an enhanced image compression process based on the RLE algorithm is presented. The proposal decreases the size of the compressed image, whereas the original method was designed primarily for compressing binary images [1] and mostly increases the size of the image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images; an application built with Visual Basic 6.0 shows the file sizes before and after compression and computes the compression ratio for both the classical RLE and the enhanced RLE algorithm.
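For reference, a minimal run-length encoder/decoder for a byte sequence is sketched below as a baseline illustration of classical RLE; the paper's specific enhancement for color images is not reproduced here, and the toy pixel row is an assumption.

```python
def rle_encode(data: bytes) -> bytes:
    """Classical byte-wise RLE: each run becomes a (count, value) pair, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

# Round-trip check and compression ratio on a toy pixel row.
row = bytes([255] * 30 + [0] * 10 + [128] * 20)
packed = rle_encode(row)
assert rle_decode(packed) == row
print(len(row) / len(packed))        # compression ratio
```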
Fractal image compression offers some desirable properties, such as fast decoding and very good rate-distortion curves, but it suffers from a high encoding time. Fractal image compression requires a partitioning of the image into range blocks. In this work, an improved partitioning process is introduced by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of the technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while the visual quality remains acceptable.
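A toy sketch of the merge idea is given below: horizontally adjacent range blocks whose statistics are close are grouped into one strip, reducing the number of ranges to encode. Using block mean and variance with fixed tolerances is an assumption for illustration; the paper's actual statistical measures and merge rule may differ.

```python
import numpy as np

def merge_similar_ranges(img, block=8, mean_tol=4.0, var_tol=25.0):
    """Group horizontally adjacent range blocks with similar mean/variance.

    Returns a list of (y, x_start, x_end) strips; fewer strips mean fewer
    range blocks to search during fractal encoding.  Tolerances are assumed.
    """
    h, w = img.shape
    strips = []
    for y in range(0, h - h % block, block):
        x = 0
        while x < w - w % block:
            x_end = x + block
            blk = img[y:y + block, x:x_end].astype(np.float64)
            m, v = blk.mean(), blk.var()
            # Extend the strip while the next block looks statistically similar.
            while x_end + block <= w:
                nxt = img[y:y + block, x_end:x_end + block].astype(np.float64)
                if abs(nxt.mean() - m) > mean_tol or abs(nxt.var() - v) > var_tol:
                    break
                x_end += block
            strips.append((y, x, x_end))
            x = x_end
    return strips

# Example: count merged strips versus plain 8x8 range blocks.
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(len(merge_similar_ranges(gray)), (128 // 8) ** 2)
```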
Digital audio requires transmitting large amounts of audio information through the most common communication systems, which in turn leads to more challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed that depends on a combined transform coding scheme. It consists of: i) a bi-orthogonal (9/7 tap) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) the produced sub-bands are then passed through the DCT to de-correlate the signal; iii) the product of the combined transform stage is passed through progressive hierarchical quantization and then traditional run-length encoding (RLE); iv) finally, LZW coding generates the output bitstream.
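A compressed sketch of the four stages is given below. PyWavelets' 'bior4.4' is used as a stand-in for the 9/7 filter pair, plain uniform quantization stands in for the progressive hierarchical quantizer, the RLE stage is omitted for brevity, and a textbook LZW encoder produces the final codes; the toy signal and the step size are assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dct

def lzw_encode(data: bytes) -> list:
    """Textbook LZW over a byte stream (output: list of dictionary codes)."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b'', []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb
        else:
            out.append(table[w])
            table[wb] = len(table)
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

# i) wavelet analysis; 'bior4.4' used as a stand-in for the 9/7 filter pair.
fs = 8000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t)                 # toy stand-in signal
cA, cD = pywt.dwt(audio, 'bior4.4')                 # low / high sub-bands

# ii) DCT on each sub-band to de-correlate it.
dA, dD = dct(cA, norm='ortho'), dct(cD, norm='ortho')

# iii) uniform quantization (stand-in for the progressive hierarchical
#      quantizer); the RLE pass of the original scheme is omitted here.
q = 0.05                                            # assumed step size
sym = np.round(np.concatenate([dA, dD]) / q).astype(np.int16)
raw = sym.tobytes()

# iv) LZW coding of the quantized byte stream.
codes = lzw_encode(raw)
print(len(raw), len(codes))
```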