In this work, the normal approach between two bodies, a sphere and a rough flat surface, was studied and calculated with the aid of an image processing technique. Four metals with different work-hardening indices were used as surface specimens, and by capturing images at a resolution of 0.006565 mm/pixel a good estimate of the normal approach was obtained. The compression tests were carried out in the strength of materials laboratory of the mechanical engineering department, and a Monsanto tensometer was used to conduct the indentation tests.
A light-section measuring microscope (BK 70x50) was used to determine the surface texture parameters of the profile, such as the standard deviation of asperity peak heights, the centre line average, the asperity density, and the asperity radius.
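These parameters can, in principle, be extracted from a digitised surface profile. The following is a minimal Python sketch, not the authors' procedure, assuming the profile is available as an array of heights z sampled at a uniform step dx; the three-point peak and curvature criteria are common simplifications.

```python
# Minimal sketch (not the authors' code) of estimating the texture parameters
# named above from a digitised profile z(x); array and step are placeholders.
import numpy as np

def texture_parameters(z, dx):
    """z: profile heights [mm], dx: sampling step [mm]."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                          # measure heights from the mean line
    cla = np.abs(z).mean()                    # centre line average (Ra)

    # Peaks: points higher than both neighbours (simple three-point criterion).
    interior = z[1:-1]
    peaks = (interior > z[:-2]) & (interior > z[2:])
    peak_heights = interior[peaks]
    sigma_s = peak_heights.std(ddof=1)        # std. deviation of asperity peak heights

    # Asperity density per unit length, and mean tip radius from the
    # three-point curvature 1/R = (2*z_i - z_{i-1} - z_{i+1}) / dx**2.
    eta = peaks.sum() / (len(z) * dx)
    curvature = (2 * interior[peaks] - z[:-2][peaks] - z[2:][peaks]) / dx**2
    radius = np.mean(1.0 / curvature)

    return cla, sigma_s, eta, radius
```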
A Gaussian distribution of asperity peak heights was assumed in calculating the theoretical values of the normal approach in the elastic and plastic regions, and these were compared with the values obtained experimentally to verify the results.
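For reference, a Greenwood-Williamson-type formulation is the usual way a Gaussian peak-height distribution enters such a calculation. The sketch below uses assumed example values for the surface parameters (not those measured in this study) and integrates the Gaussian height distribution to obtain the expected elastic contact load at a given separation; the normal approach is the reduction of that separation from its initial value. The abstract does not give the authors' exact equations, so this is illustrative only.

```python
# Illustrative Greenwood-Williamson style calculation: Gaussian asperity peak
# heights, elastic (Hertzian) asperity contacts.  All numbers are assumed.
import numpy as np
from scipy import integrate, stats

sigma  = 0.0015   # std. deviation of peak heights [mm]  (assumed)
R      = 0.05     # mean asperity tip radius [mm]        (assumed)
eta    = 300.0    # asperity density [1/mm^2]            (assumed)
A_n    = 100.0    # nominal contact area [mm^2]          (assumed)
E_star = 1.1e5    # effective elastic modulus [N/mm^2]   (assumed)

def expected_elastic_load(h):
    """Expected elastic load when the flat sits a distance h above the mean peak plane [mm]."""
    # Each asperity with height z > h carries a Hertzian load (4/3) E* sqrt(R) (z - h)^1.5.
    integrand = lambda z: (z - h) ** 1.5 * stats.norm.pdf(z, scale=sigma)
    integral, _ = integrate.quad(integrand, h, np.inf)
    return eta * A_n * (4.0 / 3.0) * E_star * np.sqrt(R) * integral

# Tabulate the expected load for a few separations (assumed values).
for h in (2 * sigma, sigma, 0.0):
    print(f"h = {h:.4f} mm  ->  load = {expected_elastic_load(h):.1f} N")
```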