Confocal microscope imaging has become popular in biotechnology labs. Confocal imaging utilizes fluorescence optics: laser light is focused onto a specific spot at a defined depth in the sample. A considerable number of images are produced regularly during research, and these images require unbiased quantification methods for meaningful analysis. Increasing efforts to tie reimbursement to outcomes will likely increase the need for objective data in analyzing confocal microscope images in the coming years. Visual quantification of confocal images with the naked eye is an essential but often underreported outcome measure because of the time required for manual counting and estimation. This current method is time-consuming and cumbersome, and manual measurement is imprecise because of natural differences in human visual ability. Objective outcome evaluation can therefore obviate the drawbacks of the current method and facilitate recording for documentation and research purposes. To achieve a fast and valuable objective estimate of the fluorescence in each image, an algorithm was designed based on machine vision techniques to extract the targeted objects in confocal images and then estimate the covered area, producing a percentage value comparable to the outcome of the current method. The approach is predicted to contribute to sustainable biotechnology image analysis by reducing the time and labor consumed. The results show strong evidence that the designed objective algorithm can replace the current manual, visual quantification method, with an Intraclass Correlation Coefficient (ICC) of 0.9.
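The abstract does not specify how the algorithm is implemented, but its core step, segmenting the fluorescent pixels and reporting the covered area as a percentage, can be sketched as below. The Otsu thresholding step and all function names are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def fluorescence_area_fraction(image, threshold=None):
    """Estimate the percentage of the image covered by fluorescent signal.

    image: 2-D array of grayscale intensities.
    threshold: intensity cutoff; if None, Otsu's method is used
    (a hypothetical stand-in for the paper's segmentation step).
    """
    img = np.asarray(image, dtype=float)
    if threshold is None:
        threshold = otsu_threshold(img)
    mask = img > threshold          # pixels counted as fluorescent
    return 100.0 * mask.mean()      # covered area as a percentage

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # background class probability
    w1 = 1.0 - w0                        # foreground class probability
    mu0 = np.cumsum(hist * centers)      # cumulative background mean mass
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)  # empty classes give 0, not NaN
    return centers[np.argmax(var_between)]

# Example: a synthetic image where 25% of pixels are bright
img = np.zeros((100, 100))
img[:50, :50] = 200.0
print(fluorescence_area_fraction(img))  # 25.0
```

In practice the segmentation step would be tuned to the staining and imaging conditions; the percentage output is what makes the result directly comparable to a manual visual estimate.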
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because M.R.I. data is so diverse. An M.R.I. data sequence comprises numerous images, and the feature area we are searching for may differ from one image in the series to the next. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (D.L.) algorithm automatically extracts the features of what is already there. The surface changes become valuable when …
The use of wavelet tools for digital images is increasing nowadays, supported by the MATLAB library. The proposed method is based on invariant moments, a set of seven moments that can be derived from the second- and third-order moments. The image is first converted from a color map to grayscale and rescaled to 512 × 512 pixels, then divided into four equal 256 × 256 pieces. For the 512 × 512 grayscale image and each of the four 256 × 256 pieces, the wavelet transform together with the moments and invariant moments is calculated. The results are then stored with the author/owner of the image to build a database for the original image, which is used to decide the authenticity of these images …
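The seven invariant moments described above are the classical Hu moments, built from second- and third-order normalized central moments. A minimal sketch of that computation, together with the four-quadrant split, might look like the following (function names are hypothetical, and the wavelet step is omitted):

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments from second- and third-order central moments."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00    # intensity centroid
    ybar = (y * img).sum() / m00

    def mu(p, q):                   # central moment mu_pq
        return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()

    def eta(p, q):                  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def quadrants(img):
    """Split an even-sized image (e.g. 512x512) into four equal pieces."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]
```

Because the Hu moments are invariant to translation (and, through the normalization, to scale), the stored feature vector stays comparable even if the image is shifted or resized before verification.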
Mammography is at present one of the available methods for the early detection of masses or abnormalities related to breast cancer. The most common abnormalities that may indicate breast cancer are masses and calcifications. The challenge lies in early and accurate detection to counter the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed, even at advanced stages, with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years.
This paper proposes a computer-aided diagnostic system for the extraction …
The study aims to explain the extent to which Ibn Ashour's principles of educational management reform contribute to achieving educational administrative reform, and the extent to which the pillars of the Kingdom's Vision 2030 in the field of education contribute to achieving that reform. The study also aims to provide a future vision of what educational administrative reform and its results should be in the Kingdom during the next ten years. To achieve the goals of the study, the researcher followed two approaches: on the theoretical side, he relied on the content analysis method; on the applied side, he adopted the Delphi method, using two questionnaires addressed to (36) expert participants …
A substantial concern in exchanging confidential messages over the internet is transmitting information safely. For example, consumers and producers of digital products are keen to know that those products are genuine and can be distinguished from worthless ones. The science of encryption can be defined as the technique of embedding data in an image, audio, or video file in a style that meets the safety requirements. Steganography is a branch of data-concealment science that aims to reach a desired security level in the exchange of private, confidential commercial and military data. This research offers a novel technique for steganography based on hiding data inside the clusters that result from fuzzy clustering. …
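The excerpt does not detail the paper's embedding scheme, so the sketch below pairs a minimal fuzzy c-means (on pixel intensities) with least-significant-bit embedding into the pixels of one cluster, purely as an illustration of the general idea; all names and parameters are assumptions.

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on scalar values; returns (memberships, centers)."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        um = u ** m
        centers = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(values[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers

def embed_bits(pixels, memberships, bits, cluster):
    """Hide bits in the LSBs of pixels whose strongest membership is `cluster`."""
    stego = pixels.copy()
    idx = np.flatnonzero(memberships.argmax(axis=1) == cluster)[:len(bits)]
    stego[idx] = (stego[idx] & 0xFE) | np.asarray(bits, dtype=stego.dtype)
    return stego

def extract_bits(pixels, memberships, cluster, n):
    """Read back n LSBs from the same cluster's pixels."""
    idx = np.flatnonzero(memberships.argmax(axis=1) == cluster)[:n]
    return (pixels[idx] & 1).tolist()
```

Since LSB embedding changes each intensity by at most 1, re-running the clustering on the stego image assigns pixels to the same clusters when the clusters are well separated, which is what lets the receiver locate the hidden bits.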
In this study, chemical oxidation was employed to synthesize polypyrrole (PPy) nanofibers. Furthermore, the PPy was treated with neodymium oxide (Nd2O3) nanoparticles, which were produced and added in defined ratios. The inquiry centered on the structural characteristics of the blend of polypyrrole and neodymium oxide after their combination. The investigation utilizes X-ray diffraction (XRD), FTIR, and Field Emission Scanning Electron Microscopy (FE-SEM) for PPy with 10%, 30%, and 50% Nd2O3 by volume. According to the electrochemical tests, the nanocomposites exhibit a substantial amount of pseudocapacitive activity.
Various speech enhancement algorithms (SEAs) have been developed in the last few decades. Each algorithm has its advantages and disadvantages because the speech signal is affected by environmental conditions. Distortion of speech results in the loss of important features, making the signal challenging to understand. SEAs aim to improve the intelligibility and quality of speech that has been degraded by different types of noise. In most applications, quality improvement is highly desirable because it can reduce listener fatigue, especially when the listener is exposed to high noise levels for extended periods (e.g., in manufacturing). An SEA reduces or suppresses the background noise to some degree and is therefore sometimes called a noise suppression algorithm.
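As one concrete example of a noise suppression algorithm of this kind, classical spectral subtraction estimates an average noise magnitude spectrum from noise-only frames and subtracts it from every frame. The sketch below is a bare-bones illustration (no windowing or overlap-add, which a practical implementation would need); the function name and parameters are assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=256, noise_frames=5, floor=0.01):
    """Basic spectral subtraction without windowing or overlap-add.

    The average noise magnitude spectrum is estimated from the first
    `noise_frames` frames, which are assumed to contain noise only.
    """
    n = len(noisy) // frame_len * frame_len        # drop any trailing partial frame
    frames = np.asarray(noisy[:n], dtype=float).reshape(-1, frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    noise_mag = mag[:noise_frames].mean(axis=0)            # noise estimate
    clean_mag = np.maximum(mag - noise_mag, floor * mag)   # spectral floor avoids negatives
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)
    return clean.reshape(-1)
```

The spectral floor is the standard remedy for the "musical noise" artifact: without it, bins where the noise estimate exceeds the observed magnitude would go negative and produce isolated spectral spikes after reconstruction.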