In this work, a study and calculation of the normal approach between two bodies, a sphere and a rough flat surface, was conducted with the aid of an image processing technique. Four metals with different work-hardening indices were used as surface specimens, and by capturing images at a resolution of 0.006565 mm/pixel a good estimate of the normal approach was obtained. The compression tests were carried out in the strength of materials laboratory of the mechanical engineering department, where a Monsanto tensometer was used to conduct the indentation tests. A light-section measuring microscope (BK 70x50) was used to calculate the surface texture profile parameters, such as the standard deviation of asperity peak heights, the centre-line average, the asperity density, and the radius of the asperities. A Gaussian distribution of asperity peak heights was assumed in calculating the theoretical value of the normal approach in the elastic and plastic regions, and the theoretical values were compared with those obtained experimentally to verify the results.
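The abstract does not give the closed-form expressions used, but an elastic-region contact calculation under a Gaussian asperity-height assumption is commonly done with the Greenwood-Williamson model. The sketch below shows one way such a theoretical value can be computed; all surface parameter values are hypothetical placeholders, not the measured ones.

```python
import numpy as np
from scipy.integrate import quad

def gw_pressure(d, eta, beta, sigma, E_star):
    """Greenwood-Williamson elastic contact pressure (load per unit nominal
    area) at mean-plane separation d, assuming Gaussian asperity peak heights
    with standard deviation sigma, asperity density eta, asperity tip radius
    beta, and effective elastic modulus E_star."""
    phi = lambda z: np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    integral, _ = quad(lambda z: (z - d) ** 1.5 * phi(z), d, np.inf)
    return (4.0 / 3.0) * eta * E_star * np.sqrt(beta) * integral

# Hypothetical surface parameters (NOT the measured ones), SI units
eta, beta, sigma, E_star = 1e10, 50e-6, 1.2e-6, 100e9
for approach in (0.2, 0.5, 1.0):          # normal approach in units of sigma
    d = (3.0 - approach) * sigma          # first contact taken at 3*sigma
    print(f"approach = {approach:.1f} sigma -> "
          f"p = {gw_pressure(d, eta, beta, sigma, E_star):.3e} Pa")
```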
In this paper, a new equivalent lumped parameter model is proposed for describing the vibration of beams under the effect of a moving load. An analytical formula for calculating such vibration for low-speed loads is also presented. Furthermore, a MATLAB/Simulink model is introduced to give a simple and accurate solution that can be used to design beams subjected to any moving load, i.e., a load of any magnitude and speed. In general, the proposed Simulink model is much easier to use than the alternative FEM software usually employed in designing such beams. The results obtained from the analytical formula and the proposed Simulink model were compared with those obtained from Ansys R19.0, and very good agreement was shown.
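The paper's lumped model and Simulink diagram are not reproduced in the abstract. As a point of reference, below is a minimal modal-superposition sketch for the textbook case such formulas approximate: a simply supported Euler-Bernoulli beam under a constant force moving at sub-critical speed. The beam and load values are hypothetical.

```python
import numpy as np

def midspan_deflection(t, P, v, L, EI, m, n_modes=10):
    """Modal-superposition deflection at midspan of a simply supported
    Euler-Bernoulli beam under a constant force P moving at speed v.
    Valid while the load is on the beam (t <= L/v) and for sub-critical
    speeds (driving frequency below each natural frequency)."""
    w = 0.0
    for n in range(1, n_modes + 1):
        wn = (n * np.pi / L) ** 2 * np.sqrt(EI / m)   # n-th natural frequency
        On = n * np.pi * v / L                        # driving frequency of mode n
        qn = (2 * P / (m * L)) * (np.sin(On * t)
             - (On / wn) * np.sin(wn * t)) / (wn**2 - On**2)
        w += qn * np.sin(n * np.pi * 0.5)             # mode shape at x = L/2
    return w

# Hypothetical beam and load data: N, m/s, m, N*m^2, kg/m
P, v, L, EI, m = 10e3, 10.0, 20.0, 2.1e8, 500.0
ts = np.linspace(1e-3, L / v, 5)
print([f"{midspan_deflection(t, P, v, L, EI, m) * 1e3:.3f} mm" for t in ts])
```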
A new method is presented in this work to detect the existence of hidden data embedded as a secret message in images. The method must be applied only to images that have the same visible properties (similar in appearance), where the human eye cannot detect the difference between them.
The method is based on an image quality metric (the Structural Content metric), that is, a comparison between the original images and the stego images, and it determines the size of the hidden data. We applied the method to four different images; in each case it detected the hidden data and found exactly the same size as the hidden data.
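The Structural Content metric named above is conventionally defined as the ratio of image energies, SC = Σx² / Σx̂², which equals 1 for identical images. A minimal sketch, assuming grayscale images and an illustrative LSB-style embedding:

```python
import numpy as np

def structural_content(original, stego):
    """Structural Content (SC) image quality metric: ratio of the energy of
    the original image to that of the suspected stego image.  SC = 1 for
    identical images; deviation from 1 suggests embedded data."""
    original = original.astype(np.float64)
    stego = stego.astype(np.float64)
    return np.sum(original ** 2) / np.sum(stego ** 2)

# Toy example with a hypothetical 1-bit LSB embedding
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = cover ^ rng.integers(0, 2, size=cover.shape, dtype=np.uint8)  # flip LSBs
print(structural_content(cover, cover), structural_content(cover, stego))
```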
A new de-blurring technique is proposed to reduce or remove blur in images. The proposed filter is built from a Lagrange interpolation calculation, adjusted by fuzzy rules and supported by a wavelet decomposition technique. The proposed Wavelet Lagrange Fuzzy filter gives good results for both fully and partially blurred regions in images.
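Only the Lagrange interpolation ingredient of the filter is sketched below; the fuzzy rules and wavelet decomposition stages are not specified in the abstract, and the pixel values and neighbour positions here are illustrative.

```python
import numpy as np

def lagrange_estimate(samples, xs, x):
    """Estimate an image intensity at position x from neighbouring samples
    via the Lagrange interpolating polynomial through (xs, samples)."""
    est = 0.0
    for i, (xi, yi) in enumerate(zip(xs, samples)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        est += yi * basis
    return est

# Estimate a degraded centre pixel of a row from its four neighbours
row = np.array([120.0, 128.0, 0.0, 140.0, 150.0])   # index 2 is degraded
print(lagrange_estimate(row[[0, 1, 3, 4]], [0, 1, 3, 4], 2))
```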
Semantic segmentation is an exciting research topic in medical image analysis because it aims to detect objects in medical images. In recent years, approaches based on deep learning have shown more reliable performance than traditional approaches in medical image segmentation. The U-Net network is one of the most successful end-to-end convolutional neural networks (CNNs) presented for medical image segmentation. This paper proposes a multiscale residual dilated convolution neural network (MSRD-UNet) based on U-Net. MSRD-UNet replaces the traditional convolution block with a novel deeper block that fuses multi-layer features using dilated and residual convolution. In addition, the squeeze-and-excitation (SE) attention mechanism and the s…
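The exact layer sizes of the MSRD-UNet block are not given in the truncated abstract. Below is a minimal PyTorch sketch, in the spirit of the described block, of a residual block whose parallel dilated convolutions are fused and then gated by squeeze-and-excitation; the channel counts, dilation rates, and reduction ratio are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

class ResidualDilatedSE(nn.Module):
    """Residual block with parallel dilated convolutions and an SE gate
    (illustrative configuration, not the paper's exact block)."""
    def __init__(self, channels, dilations=(1, 2, 4), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(len(dilations) * channels, channels, 1)
        self.se = nn.Sequential(                 # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        fused = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return x + fused * self.se(fused)        # residual connection

x = torch.randn(1, 32, 64, 64)
print(ResidualDilatedSE(32)(x).shape)            # torch.Size([1, 32, 64, 64])
```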
This article investigates how an appropriate chaotic map (Logistic, Tent, Henon, Sine, ...) should be selected for image encipherment, taking its advantages and disadvantages into consideration. Does the selection of an appropriate map depend on the image properties? The proposed system shows how relevant properties of the image influence the evaluation of the selected chaotic map. The first section discusses the main principles of chaos theory, their applicability to image encryption including various sorts of chaotic maps and their mathematics, and the factors that determine the security and efficiency of such a map. Hence the approach presents a practical standpoint from which certain chaotic maps will bec…
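As a concrete instance of the kind of map discussed, here is a minimal sketch of logistic map based image encipherment via an XOR keystream; the seed and control parameter shown are illustrative, and in practice they would form the secret key.

```python
import numpy as np

def logistic_keystream(length, x0=0.7, r=3.99):
    """Byte keystream from the logistic map x_{n+1} = r*x*(1-x).
    r near 4 keeps the map in its chaotic regime."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encipher(image, x0=0.7, r=3.99):
    """Encipher (or decipher: XOR is its own inverse) a grayscale image."""
    ks = logistic_keystream(image.size, x0, r)
    return (image.reshape(-1) ^ ks).reshape(image.shape)

img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
enc = xor_encipher(img)
assert np.array_equal(xor_encipher(enc), img)    # round-trip check
```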
The effect of using three different interpolation methods (nearest neighbour, linear, and non-linear) on a 3D sinogram to restore the data missing when the angular step is greater than 1° (a 1° step being considered the optimum 3D sinogram) is presented. Two reconstruction methods are adopted in this study: the back-projection method and the Fourier slice theorem method. The results show that the second reconstruction method, combined with linear interpolation, is promising when the angular step is less than 20°.
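A minimal sketch of the linear variant follows: filling missing projection angles of a sinogram by column-wise linear interpolation onto a 1° grid. The array shapes and the 10° coarse step are illustrative.

```python
import numpy as np

def fill_missing_angles(sino_coarse, coarse_angles, fine_angles):
    """Linearly interpolate a coarsely sampled sinogram (rows = projection
    angles, columns = detector bins) onto a finer angular grid."""
    filled = np.empty((len(fine_angles), sino_coarse.shape[1]))
    for col in range(sino_coarse.shape[1]):
        filled[:, col] = np.interp(fine_angles, coarse_angles, sino_coarse[:, col])
    return filled

coarse = np.arange(0, 180, 10.0)     # 10-degree step: data missing in between
fine = np.arange(0, 180, 1.0)        # target 1-degree "optimum" grid
sino = np.random.default_rng(2).random((len(coarse), 64))
print(fill_missing_angles(sino, coarse, fine).shape)   # (180, 64)
```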
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. The new achievements in the fields of machine learning and deep learning let people identify and recognise the surrounding environment. In this study, the efficiency and high performance of a deep learning architecture are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) from different separate datasets. In the second step, the collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained using the indoor and outdoor se…
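The training details are not given in the truncated abstract. A minimal transfer learning sketch with torchvision's pre-trained MobileNet-V2 is shown below, assuming a two-class indoor/outdoor head; the frozen-backbone choice and class count are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

def build_scene_classifier(num_classes=2):
    """Fine-tuning sketch: reuse pre-trained MobileNet-V2 features and
    replace the classifier head for an indoor/outdoor scene task."""
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    for p in net.features.parameters():
        p.requires_grad = False                 # freeze the backbone
    net.classifier[1] = nn.Linear(net.last_channel, num_classes)
    return net

model = build_scene_classifier()                # ready for a standard training loop
```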
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med…
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e., a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook from 90° to 270° in 90° steps. Then, we classified each code word depending on its angle into four types of binary codebooks (i.e., Pour when …, Flat when …, Vertical when …, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s…
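A minimal sketch of the rotation step follows, assuming 4x4 BTC bit planes as code words (the block size is an assumption; the abstract does not state it).

```python
import numpy as np

def rotated_codebook(codebook):
    """Expand each binary code word (here a 4x4 BTC bit plane) with its
    90, 180, and 270 degree rotations, so a match in any orientation can
    be found without a separate full search per orientation."""
    expanded = []
    for word in codebook:
        for k in range(4):                    # k * 90 degrees
            expanded.append(np.rot90(word, k))
    return expanded

word = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=np.uint8)
print(len(rotated_codebook([word])))          # 4 orientations per code word
```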
Architecture has evolved through the ages in its forms, relationships, materials, and mechanisms according to the conditions of each era, up to the era of digital technology. The change in the proportions and aesthetic dimensions of contemporary architectural formation made possible by digitization has created innovative plastic properties, using void formation in facades and introducing the void as a formative and aesthetic element. This led to the emergence of new creative concepts and ideas that contradict traditional ideas and are consistent with the spirit of the times, and to a revolution in the world of architectural form at the level of architectural ideas and the generation of shapes, materials, and construction…