This article presents a comprehensive study of edge detection methods and algorithms in digital images, a fundamental process in the field of image processing and analysis. The purpose of edge detection is to discover the boundaries that separate the different regions of an image, which contributes to a better understanding of image content and to the extraction of structural information. The article begins by clarifying the concept of an edge and its importance in image analysis, then surveys the most prominent edge detection methods used in this field (e.g., the Sobel, Prewitt, and Canny filters), along with other schemes based on detecting abrupt changes in light intensity and color gradation. The research also discusses the benefits and limitations of each technique, emphasizing their effectiveness on various kinds of images and the challenges they face in complex environments. The article offers a comparative analysis of the numerous approaches used in edge detection, which assists in selecting a suitable technique according to the requirements of applications such as video processing, object recognition, medical image analysis, and computer vision.
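As a minimal sketch of one of the filters named above, the Sobel operator estimates horizontal and vertical intensity gradients with two 3×3 kernels and combines them into a gradient magnitude; the test image and its values here are illustrative, not from the article:

```python
import numpy as np

# Sobel kernels: KX responds to horizontal intensity changes, KY to vertical.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale array via the Sobel
    operator (valid region only, no border padding)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * KX)
            gy[i, j] = np.sum(patch * KY)
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
print(mag)  # response peaks along the columns where intensity jumps
```

The Prewitt filter follows the same scheme with unweighted kernels, while Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of such gradient estimates.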
The construction project is by its nature a very complicated undertaking and requires specialized knowledge to lead it to success. It is socially, technically, and economically complicated in its planning, management, and implementation, because it involves many variables and multiple stakeholders in addition to being affected by the surrounding environment. Successful projects depend on three fundamental points: cost-time, performance, and specifications. The project stakeholders' objective is to achieve the best specifications within the cost and time frame stipulated in the contract.
The question is: was the optimum implementation accomplished? The provision for the success of the project
The topic of urban transformations has attracted the attention of researchers, as it is one of the basic issues through which cities can be transformed toward sustainability. Transformation occurs at a specific level among several levels, according to a philosophical concept known as the crossing. This article relies on a methodology that aims to find a new approach to urban transformation based on the crossing concept. The concept derives from philosophical foundations based on the notions of being, process, becoming, and integration. Four levels of the crossing have been identified: normal, ascending, leap, and descending. Each of these levels has specific characteristics that distinguish it. The results showed that there is no descending
Most reinforced concrete (RC) structures are constructed with square or rectangular columns. The cross-section of these columns is much larger than the thickness of their partitions, so parts of the columns protrude from the partitions. The emergence of column edges out of the walls has several disadvantages, and this limitation is difficult to overcome with square or rectangular columns. To solve this problem, new types of RC columns, called specially shaped reinforced concrete (SSRC) columns, have been used as hidden columns. In addition, the use of SSRC columns provides many structural and architectural advantages compared with rectangular columns. Therefore, this study was conducted to explain the structura
Finding the shortest route in wireless mesh networks is an important problem. Many techniques are used to solve it, such as dynamic programming, evolutionary algorithms, and weighted-sum techniques. In this paper, we use dynamic programming to find the shortest path in wireless mesh networks because of its generality, reduced complexity, ease of numerical computation, simplicity in incorporating constraints, and conformity to the stochastic nature of some problems. The routing problem is a multi-objective optimization problem with constraints such as path capacity and end-to-end delay. Single-constraint routing problems and solutions using the Dijkstra, Bellman-Ford, and Floyd-Warshall algorith
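As a minimal sketch of one of the single-constraint solvers named above, Dijkstra's algorithm computes least-cost routes over non-negative link weights; the mesh topology, node names, and weights here are illustrative, not taken from the paper:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distance from source to every reachable node.
    graph: {node: [(neighbor, link_cost), ...]} with non-negative costs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter route was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative four-node mesh with symmetric link costs.
mesh = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(mesh, "A"))  # least cost from A to each node
```

Extending this to the multi-constraint case (e.g., bounding end-to-end delay while minimizing cost) is where the dynamic-programming formulation discussed in the paper comes in, since plain Dijkstra handles only a single additive metric.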
Nowadays, energy demand continuously rises while energy stocks are dwindling, so using current resources more effectively is crucial for the world. A widespread method of utilizing energy effectively is to generate electricity using thermal gas turbines (GT). One of the most important problems gas turbines suffer from is high ambient air temperature, especially in summer. The current paper details the effects of ambient conditions on the performance of a gas turbine through energy audits, taking into account the influence of ambient conditions on the specific heat capacity (cp), the isentropic exponent (γ), and the gas constant of air (R). A computer program was developed to examine the operation of a power plant at various ambient temperature
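A brief sketch of the ideal-gas property relations connecting the three quantities above (the numerical cp values are rough textbook figures for air, not results from the paper):

```python
# For an ideal gas: cv = cp - R and gamma = cp / cv = cp / (cp - R).
R = 287.0  # gas constant of air, J/(kg·K)

def gamma(cp):
    """Isentropic exponent from the specific heat at constant pressure."""
    return cp / (cp - R)

cp_cool = 1005.0  # approximate cp of air near 15 °C, J/(kg·K)
cp_hot = 1050.0   # illustrative higher cp at elevated temperature
print(gamma(cp_cool))
print(gamma(cp_hot))  # cp grows with temperature, so gamma falls
```

This is why ambient temperature matters for the audit: as air heats up, cp rises and γ falls, which changes the compressor work and turbine output predicted by the cycle analysis.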
Our aim was to investigate the inclusion of sexual and reproductive health and rights (SRHR) topics in medical curricula and the perceived need for, feasibility of, and barriers to teaching SRHR. We distributed a survey with questions on SRHR content, and on the factors regulating that content, to medical universities worldwide using chain referral. Associations between high SRHR content and independent variables were analyzed using unconditional linear regression or the χ2 test. Text data were analyzed by thematic analysis. We collected data from 219 respondents at 143 universities in 54 countries. Clinical SRHR topics such as safe pregnancy and childbirth (95.7%) and contraceptive methods
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must usually be fed a significant amount of labeled data to learn representations automatically. Ultimately, more data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for