A simulation study of using 2D tomography to reconstruct a 3D object is presented. The 2D Radon transform is used to create a 2D projection of each slice of the 3D object at different heights. The 2D back-projection and Fourier slice theorem methods are then used to reconstruct each 2D slice of the 3D object from its projections. The results show that the Fourier slice theorem method can reconstruct the general shape of the body together with its internal structure, unlike the 2D back-projection (Radon) method, which reconstructed only the general shape of the body because of the blurring artefact. Since the Fourier slice theorem method also could not remove the blurring artefact entirely, this research suggests a thresholding technique to eliminate the excess points caused by the blurring artefact.
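The projection/reconstruction pipeline described in this abstract can be sketched in NumPy. This is an illustrative toy, not the study's actual setup: the square phantom, the 60-angle sampling, and the nearest-neighbour rotation are all assumptions. It shows the blurring artefact of unfiltered back-projection and checks the Fourier slice theorem numerically at the 0° projection, where the 1D FFT of the projection equals the ky = 0 row of the slice's 2D FFT.

```python
import numpy as np

def rotate_nn(img, angle):
    """Nearest-neighbour rotation about the image centre (pure NumPy)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    ca, sa = np.cos(angle), np.sin(angle)
    # inverse mapping: where each output pixel samples the input
    xr = ca * (xs - c) + sa * (ys - c) + c
    yr = -sa * (xs - c) + ca * (ys - c) + c
    xi = np.clip(np.round(xr).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr).astype(int), 0, n - 1)
    return img[yi, xi]

def radon_2d(img, angles):
    """Parallel-beam projections: rotate the slice, sum along columns."""
    return np.array([rotate_nn(img, a).sum(axis=0) for a in angles])

def back_project(sino, angles):
    """Unfiltered back-projection: smear each projection and accumulate."""
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for proj, a in zip(sino, angles):
        recon += rotate_nn(np.tile(proj, (n, 1)), -a)
    return recon / len(angles)

n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0                      # one simple square slice
angles = np.linspace(0.0, np.pi, 60, endpoint=False)

sino = radon_2d(phantom, angles)                 # 60 projections of the slice
recon = back_project(sino, angles)               # blurred reconstruction

# Fourier slice theorem at angle 0: the 1D FFT of the projection equals
# the ky = 0 row of the 2D FFT of the slice
fst_ok = np.allclose(np.fft.fft(sino[0]), np.fft.fft2(phantom)[0])
```

The unfiltered back-projection recovers the square's general shape but smears intensity outside it, which is the blurring artefact the abstract describes; the Fourier-slice identity is what frequency-domain reconstruction builds on.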
An environmentally benign second-derivative spectrometric approach was developed for estimating the dissociation constants pKa(s) of metformin, a common anti-diabetic drug. The ultraviolet spectra of aqueous solutions of metformin were measured at different acidities, and the second derivative of each spectrum was graphed. The overlaid second-derivative graphs exhibited two isosbestic points, at 225.5 nm and 244 nm, indicating the presence of two dissociation constants for metformin, pKa1 and pKa2, respectively. The method was validated by evaluating the reproducibility of the acquired results, comparing the dissociation constants estimated by two different strategies, which showed excellent agreement. As we
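The second-derivative/isosbestic-point idea can be illustrated numerically. The two Gaussian bands below are hypothetical stand-ins, not metformin's actual spectra, so the crossing falls at the synthetic bands' midpoint rather than at the 225.5 nm or 244 nm values reported above; the point is only that an isosbestic point appears where the overlaid second-derivative curves all take the same value, i.e. where their spread vanishes.

```python
import numpy as np

wl = np.linspace(200.0, 280.0, 801)              # wavelength grid, nm
band_acid = np.exp(-((wl - 220.0) / 8.0) ** 2)   # hypothetical acidic-form band
band_base = np.exp(-((wl - 240.0) / 8.0) ** 2)   # hypothetical basic-form band

# spectra recorded at different acidities: the basic-form fraction grows with pH
fracs = np.linspace(0.0, 1.0, 6)
spectra = [f * band_base + (1.0 - f) * band_acid for f in fracs]

# second derivative of each spectrum with respect to wavelength
d2 = np.array([np.gradient(np.gradient(s, wl), wl) for s in spectra])

# at an isosbestic point every overlaid curve takes the same value,
# so the spread across acidities vanishes there
spread = d2.max(axis=0) - d2.min(axis=0)
interior = slice(50, -50)                        # avoid np.gradient edge effects
iso_wl = wl[interior][np.argmin(spread[interior])]
```

With these symmetric synthetic bands the crossing lands at 230 nm, their midpoint; real band shapes would place it elsewhere.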
This study concerns the removal of a trihydrate antibiotic (amoxicillin) from synthetically contaminated water by adsorption on modified bentonite. The bentonite was modified with hexadecyl trimethyl ammonium bromide (HTAB), which turned it from a hydrophilic into a hydrophobic material. The effects of different parameters were studied in batch experiments: contact time, solution pH, agitation speed, initial contaminant concentration (C0), and adsorbent dosage. Maximum removal of amoxicillin (93%) was achieved at a contact time of 240 min, pH 10, agitation speed of 200 rpm, initial concentration of 30 ppm, and adsorbent dosage of 3 g bentonite per 1 L of pollutant solution. The characterization of the adsorbent, modi
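The removal efficiency and adsorption capacity implied by these batch conditions can be checked with the standard formulas. The residual concentration `ce` here is an assumption back-calculated from the reported 93% removal, not a measured value from the study.

```python
# back-of-the-envelope check of the reported batch result
c0 = 30.0          # initial amoxicillin concentration, ppm
ce = 2.1           # assumed equilibrium concentration, ppm (from 93% removal)
dosage = 3.0       # adsorbent dosage, g bentonite per litre of solution

removal = (c0 - ce) / c0 * 100.0     # removal efficiency, %
qe = (c0 - ce) / dosage              # adsorption capacity, mg adsorbed per g
```

At these assumed values the capacity works out to 9.3 mg of amoxicillin per gram of modified bentonite.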
The TV medium derives its formal shape from the technological development taking place in all scientific fields, creatively fused in the television image, which consists mainly of various visual levels and formations. By the second decade of the millennium, however, the television medium, and drama in particular, began looking for a paradigm shift in innovative aesthetic form and in advanced expressive, performative techniques that would enable it to treat what had previously been impossible to visualize, while presenting what is new and innovative in unprecedented as well as familiar objective and intellectual treatments. Thus the TV medium has sought for work
The present study critically examines the discursive representation of Arab immigrants in selected American news channels. To achieve the aim of this study, twenty news subtitles were extracted from the ABC and NBC channels. The selected news subtitles were analyzed within van Dijk's (2000) critical discourse analysis framework. Ten discourse categories were examined to uncover the image of Arab immigrants in the American news channels. This image was examined in terms of five ideological assumptions: "us vs. them", "ingroup vs. outgroup", "victims vs. agents", "positive self-presentation vs. negative other-presentation", and "threat vs. non-threat". Analysis of the data reveals that Arab immig
Deep learning algorithms have recently achieved great success, especially in the field of computer vision. This research describes a classification method applied to a dataset of multiple image types: Synthetic Aperture Radar (SAR) images and non-SAR images. For this classification, transfer learning was used, followed by fine-tuning, with architectures pre-trained on the well-known ImageNet database. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input dataset consists of five classes: the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
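The "frozen feature extractor plus new classifier head" pattern described above can be sketched without a deep learning framework. This is not the paper's VGG16 pipeline: the frozen backbone is simulated by a fixed random projection with ReLU, and the data are synthetic two-class inputs; only the new classifier head (a logistic regression) is trained, which is the essence of the transfer-learning step.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for a frozen pre-trained backbone (e.g. VGG16 without its head):
# a fixed random projection followed by ReLU; its weights are never updated
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)

# two synthetic "image" classes, separable in feature space
n = 200
x0 = rng.normal(loc=-1.0, size=(n, 64))
x1 = rng.normal(loc=+1.0, size=(n, 64))
feats = extract_features(np.vstack([x0, x1]))
X = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardize features
y = np.array([0] * n + [1] * n)

# train only the new head: logistic regression by gradient descent
w, b = np.zeros(16), 0.0
for _ in range(300):
    z = np.clip(X @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

pred = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30.0, 30.0))) > 0.5
acc = (pred == y.astype(bool)).mean()
```

In a real transfer-learning setup the random projection would be replaced by the pre-trained convolutional layers, and fine-tuning would additionally unfreeze some of those layers at a small learning rate.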
Image compression plays an important role in reducing the size and storage of data while significantly increasing its transmission speed over the Internet. Image compression has been an important research topic for several decades; recently, deep learning has achieved great success in many areas of image processing, including image compression, and its use in this field is growing steadily. Deep neural networks have likewise achieved great success in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of human eye
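The encode/decode bottleneck behind an autoencoder-based compressor can be illustrated without a deep learning framework. This sketch is not the paper's CAE: it uses a linear autoencoder, whose optimal solution is given by PCA, on synthetic low-rank "patches". Compressing 64 values to an 8-value code gives an 8:1 ratio, and the reconstruction error measures what the bottleneck loses.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "8x8 image patches": low-rank structure plus a little noise
n, d, k = 500, 64, 8                    # samples, patch dimension, code size
basis = rng.normal(size=(k, d))
patches = rng.normal(size=(n, k)) @ basis + 0.05 * rng.normal(size=(n, d))

# a linear autoencoder's optimum is PCA, so the encoder/decoder pair can
# be read off the top-k right singular vectors of the centred data
mean = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ vt[:k].T        # 64 values -> 8 (8:1 compression)

def decode(z):
    return z @ vt[:k] + mean            # 8 values -> 64 reconstruction

codes = encode(patches)
recon = decode(codes)
rel_err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
```

A convolutional autoencoder replaces the linear maps with learned convolutional encoder/decoder stacks, which lets the bottleneck exploit local image structure far better than this linear baseline.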