In this work, we are interested in a general solution for the calculation of the image of a single bar under partially coherent illumination. The solution is based on Hopkins' theory of image formation in optical instruments, in which it was shown that, for all practical cases, the illumination of the object may be treated as due to a self-luminous source placed at the exit pupil of the condenser. The diffraction integral describing the intensity distribution in the image of a single bar, treated as an object of half-width U0 = 8 viewed through a circular aperture, can be fitted to the distributions observed in various types of microscope by a suitable choice of the coherence parameter (S = 0.25, 1.0, 4.0). The aberrations were restricted to defocusing and third-order coma, and the integrals were evaluated by Gauss quadrature. The required number of integration points depends, of course, on the aberrations present; 20 Gauss points proved sufficient and reduced the computation time to a few seconds. The aberration-free system in the paraxial receiving plane (W20 = 0.0) is especially interesting, as it predicts the shape of the diffraction pattern. The influence of defocusing is very pronounced and noticeably distorts the image of the object. For the off-axis aberration (third-order coma), the high peaks in the images are most noticeable in the region of almost perfect coherence (S = 0.25). As S is increased from 0.25 to 1.0, there is a pronounced redistribution of intensity, with peaks moving from one side of the image to the other. Calculations were also performed for systems having spherical aberration, but the results are qualitatively similar to those of an aberration-free defocused system and are therefore omitted.
A computer program was written in FORTRAN 77 to evaluate Hopkins' modified intensity distribution as a function of the dimensionless distance U'. A further advantage of this work is that it invites the development of more efficient numerical methods for this class of problems, and it reduces the computation time to a few seconds per individual intensity curve.
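The original program was written in FORTRAN 77 and the paper does not reproduce the Hopkins integrand, so the sketch below only illustrates the quadrature machinery: a 20-point Gauss-Legendre rule in Python, applied to a simple cosine kernel with a known closed form rather than the actual diffraction integral.

```python
import numpy as np

def gauss_quad(f, a, b, n=20):
    """Approximate the integral of f over [a, b] with n-point Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes onto [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Illustrative integrand only: a cosine kernel of the kind that appears in
# diffraction integrals; the exact value of the integral is 2*sin(u)/u.
u = 3.0
approx = gauss_quad(lambda x: np.cos(u * x), -1.0, 1.0, n=20)
exact = 2.0 * np.sin(u) / u
```

With 20 points the rule integrates smooth kernels like this to near machine precision, which is consistent with the few-second run times reported above.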
Pure cadmium oxide (CdO) films, and films doped with zinc at different atomic ratios, were prepared by pulsed laser deposition using an Nd:YAG laser and targets of pressed powder capsules. X-ray diffraction measurements showed the cubic CdO structure. Another phase appeared, especially at high zinc percentages, corresponding to the hexagonal structure of zinc. The degree of crystallinity, as well as the crystallite size, increased with the zinc ratio of the targets used. Atomic force microscopy measurements showed that increasing the dopant percentage leads to an increase in the size of the nanoparticles; the particle size distribution was irregular and wide and, in addition, there was an increase in the surfac
This research delves into the role of satirical television programs in shaping the image of Iraqi politicians. The research problem is summarized in the main question: How does satire featured in television programs influence the portrayal of Iraqi politicians? This research adopts a descriptive approach and employs a survey methodology. The primary data collection tool is a questionnaire, complemented by observation and measurement techniques. The study draws upon the framework of cultural cultivation theory as a guiding theoretical foundation. A total of 430 questionnaires were disseminated among respondents who regularly watch satirical programs, selected through a multi-stage random sampling procedure.
Image captioning is the process of adding an explicit, coherent description of the contents of an image. This is done using the latest deep learning techniques, combining computer vision and natural language processing, to understand the contents of the image and give it an appropriate caption. Multiple datasets suitable for many applications have been proposed. The biggest challenge for researchers in natural language processing is that the datasets are not available in all languages. Researchers have therefore translated the most famous English datasets with Google Translate in order to understand the content of the images in their mother tongue. In this paper, the proposed review aims to enhance the understanding o
In this research, a proposed technique is used to enhance the performance of the frame difference technique for extracting moving objects from a video file. One of the most effective factors in degrading performance is the presence of noise, which may cause incorrect identification of moving objects. It was therefore necessary to find a way to diminish this noise effect. Traditional average and median spatial filters can handle such situations, but here the focus is on utilizing the spectral domain, using Fourier and wavelet transforms to decrease the noise effect. Experiments and statistical features (entropy, standard deviation) showed that these transforms can overcome such problems in an elegant way.
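The abstract names frame differencing as the baseline and spatial median filtering as the traditional noise remedy; the paper's spectral-domain variants are not reproducible from the abstract alone. The sketch below shows only that baseline pipeline, with an illustrative threshold of 25 grey levels and a hand-rolled 3x3 median pre-filter, both assumptions rather than the paper's parameters.

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Frame difference: pixels whose absolute change exceeds thresh
    are flagged as belonging to moving objects."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def median3(img):
    """3x3 median filter (border pixels kept as-is), used to suppress
    impulse noise before differencing."""
    out = img.copy()
    stack = [img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out[1:-1, 1:-1] = np.median(np.stack(stack), axis=0)
    return out

# Synthetic example: a static background in which one bright block appears.
prev = np.zeros((20, 20), dtype=np.uint8)
curr = prev.copy()
curr[5:9, 5:9] = 200                       # the "moving object"
mask = moving_mask(median3(prev), median3(curr))
```

Replacing `median3` with a Fourier- or wavelet-domain denoiser at the same point in the pipeline is where the paper's contribution would slot in.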
Recently, image enhancement techniques have become one of the most significant topics in digital image processing. The basic problem in enhancement is how to remove noise or improve the details of a digital image. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses a fuzzy logic technique to process each pixel of the entire image and then decides whether it is noisy or needs further processing for highlighting. This is performed by examining the degree of association with neighbouring elements based on a fuzzy algorithm. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse
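The abstract describes the idea (a fuzzy "degree of association" with the 3x3 neighbourhood decides whether a pixel is noisy) without giving membership functions, so the following is a minimal sketch under assumed linear breakpoints `lo`/`hi`, not the paper's rule base: the noisy-membership grows with deviation from the neighbourhood median, and the pixel is blended toward that median in proportion to it.

```python
import numpy as np

def fuzzy_denoise(img, lo=10, hi=60):
    """Toy fuzzy de-noising sketch: membership in 'noisy' rises linearly
    from 0 (deviation <= lo) to 1 (deviation >= hi), measured against the
    3x3 neighbourhood median; the breakpoints are illustrative only."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            med = np.median(nb)
            dev = abs(out[y, x] - med)
            mu = np.clip((dev - lo) / (hi - lo), 0.0, 1.0)  # noisy-membership
            out[y, x] = (1 - mu) * out[y, x] + mu * med     # blend toward median
    return out

# Flat image with one impulse ("salt") pixel: it is pulled back to the background
# while pixels that agree with their neighbourhood are left untouched.
img = np.full((9, 9), 50, dtype=np.uint8)
img[4, 4] = 255
clean = fuzzy_denoise(img)
```

Because the blending weight is continuous, mildly deviating detail pixels are only partially smoothed, which is the usual argument for a fuzzy rule over a hard impulse detector.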
In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient tradeoff among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and the stego-image, and maximizing the robustness of the embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude adder/subtractor modules. The conducted test results indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the steg
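The abstract does not specify the paper's adder/subtractor modules, so the sketch below shows only the general idea of magnitude modulation with uniform quantization: one mid-frequency DCT coefficient per 8x8 block is forced to an even or odd multiple of a quantization step to carry one bit. The coefficient position `(3, 4)` and step of 16 are illustrative assumptions, as is the orthonormal DCT construction.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: D @ X @ D.T is the 2-D DCT of X."""
    k = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)
    return D

def embed_bit(block, bit, step=16, pos=(3, 4)):
    """Quantize one mid-frequency DCT magnitude to an even (bit 0) or odd
    (bit 1) multiple of `step` -- a simple magnitude-modulation sketch."""
    D = dct_matrix(block.shape[0])
    C = D @ block.astype(float) @ D.T
    c = C[pos]
    q = int(round(abs(c) / step))
    if q % 2 != bit:
        q += 1                              # move to the nearest level of right parity
    C[pos] = np.sign(c) * q * step if c != 0 else q * step
    return D.T @ C @ D                      # inverse DCT back to pixel values

def extract_bit(block, step=16, pos=(3, 4)):
    """Recover the bit from the parity of the quantized magnitude."""
    D = dct_matrix(block.shape[0])
    C = D @ block.astype(float) @ D.T
    return int(round(abs(C[pos]) / step)) % 2
```

Larger steps trade image fidelity for robustness, which is exactly the three-way tension the abstract describes.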
In this paper, an algorithm through which we can embed more data than regular spatial-domain methods is introduced. We compress the secret data using Huffman coding, and this compressed data is then embedded using a Laplacian sharpening method.
We used Laplace filters to determine the effective hiding places; then, based on a threshold value, we found the places with the highest filter responses for embedding the watermark. Our aim in this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places with the highest edge values, where changes are least noticeable.
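The Huffman stage is standard compression, so the part worth sketching is the site selection: a Laplacian response map thresholded to keep only strong-edge pixels as candidate hiding places. The 4-neighbour kernel and the threshold below are illustrative assumptions, not the paper's exact filter or value.

```python
import numpy as np

def laplacian_map(img):
    """Absolute 4-neighbour Laplacian response (interior pixels only)."""
    f = img.astype(float)
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = np.abs(f[:-2, 1:-1] + f[2:, 1:-1]
                             + f[1:-1, :-2] + f[1:-1, 2:] - 4 * f[1:-1, 1:-1])
    return lap

def hiding_places(img, thresh):
    """(row, col) positions whose Laplacian response exceeds thresh --
    strong edges, where embedding changes are least noticeable."""
    ys, xs = np.nonzero(laplacian_map(img) > thresh)
    return list(zip(ys.tolist(), xs.tolist()))

# Example: a vertical step edge yields candidate positions along the boundary.
img = np.zeros((10, 10), dtype=np.uint8)
img[:, 5:] = 200
spots = hiding_places(img, thresh=100)
```

The Huffman-compressed bitstream would then be written into the pixels (or their least significant bits) at these positions, in a traversal order shared with the extractor.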
The perform