Many consumers of electric power exceed the consumption limit permitted by electrical power distribution stations. We therefore propose a validation approach that applies machine learning (ML) technology to teach electrical consumers how to consume power properly without wasting energy. The validation approach is one of a family of intelligent processes related to energy consumption known as efficient energy consumption management (EECM) approaches; it is connected through Internet of Things (IoT) technology to the Google Firebase Cloud, where a utility center checks whether efficient energy consumption is being achieved. The approach divides the measured actual-power data (A_p) of the electrical model into two portions: the training portion is selected for different maximum actual powers, and the validation portion is determined from the minimum output power consumption and then compared with the actual required input power. Simulation results show that the energy expenditure problem can be solved with good accuracy by reducing the maximum actual power (A_p) over a 24-hour period for a single house, and that the electricity bill cost is reduced as well.
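The abstract does not give the exact splitting or thresholding procedure; the following is a minimal sketch of the general idea, assuming hourly actual-power readings for one house, a hypothetical permissible limit (LIMIT_KW), and a simple train/validation split as described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

LIMIT_KW = 5.0  # hypothetical permissible limit set by the distribution station

# 24 hourly actual-power readings (A_p) for a single house, in kW (synthetic example)
rng = np.random.default_rng(0)
hourly_power = rng.uniform(0.5, 7.0, size=24)

# Split the measured data into a training portion and a validation portion
train_ap, valid_ap = train_test_split(hourly_power, test_size=0.3, random_state=0)

# Training portion: different maximum actual powers; validation: minimum consumption
max_train_ap = train_ap.max()
min_valid_ap = valid_ap.min()

# Flag the hours whose consumption exceeds the permissible limit
over_limit_hours = np.flatnonzero(hourly_power > LIMIT_KW)
print(f"max training A_p: {max_train_ap:.2f} kW, min validation A_p: {min_valid_ap:.2f} kW")
print(f"hours exceeding the {LIMIT_KW} kW limit: {over_limit_hours.tolist()}")
```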
The present work covers the design, construction, and operation of a prototype solar absorption refrigeration system, using methanol as the refrigerant to avoid refrigerants that contribute to global warming and the greenhouse effect. A flat-plate collector was used because it is simple, inexpensive, and efficient. Many test runs (more than 50) were carried out on the system from May to October 2013; the main results were taken between July 15, 2013 and August 15, 2013 to find the maximum C.O.P., cooling capacity, temperature, and pressure of the system. The system reached a maximum generator temperature of 93.5 °C on July 18, 2013 at 2:30 pm, and the average generator temperature T_g,avr was 74.7 °C for this period. The maximum pressure Pg
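The abstract does not define C.O.P. explicitly; for reference, a common textbook definition for an absorption refrigeration cycle (not necessarily the exact formulation used in this work) relates the evaporator cooling effect Q_evap to the heat supplied at the generator Q_gen, with pump work often neglected for a solar-driven unit:

\mathrm{C.O.P.} = \frac{Q_{\mathrm{evap}}}{Q_{\mathrm{gen}}}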
Risk assessment in build-operate-transfer (BOT) projects is essential for identifying and analyzing risks so that an appropriate response decision can be made. In this paper, the analytic hierarchy process (AHP) technique was used to decide how to respond to the most prominent risks arising in BOT projects; it compares the criteria for each risk as well as the available alternatives, using matrix-based mathematical methods to reach an appropriate response decision for each risk. Ten common risks in BOT contracts, grouped into six main risk headings, are adopted for analysis. The procedures followed in this paper are the questionnaire method
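The paper's actual comparison matrices are not reproduced here; the sketch below shows the standard AHP priority-vector calculation and consistency check for a hypothetical 3x3 pairwise comparison matrix of response alternatives.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three response alternatives
# (entries follow Saaty's 1-9 scale; the paper's actual matrices are not shown here).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector: normalize each column, then average across rows
col_sums = A.sum(axis=0)
priorities = (A / col_sums).mean(axis=1)

# Consistency check (Saaty): lambda_max, consistency index CI, and consistency ratio CR
n = A.shape[0]
lambda_max = (A @ priorities / priorities).mean()
ci = (lambda_max - n) / (n - 1)
ri = 0.58  # random index for n = 3
cr = ci / ri

print("priorities:", np.round(priorities, 3))
print(f"lambda_max = {lambda_max:.3f}, CI = {ci:.3f}, CR = {cr:.3f}")
```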
In this paper, three techniques for image compression are implemented. The proposed techniques are a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet transform) technique. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He), percent r
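This is not the paper's exact pipeline, but the sketch below illustrates the general pattern on a 2-D synthetic frame: a two-level Haar DWT with PyWavelets, coefficient thresholding, a rough compression ratio from the retained coefficients, and the grey-level entropy He. The 3-D and multi-wavelet variants would follow the same structure with an added temporal axis or multi-wavelet filter bank.

```python
import numpy as np
import pywt

def image_entropy(img):
    """Shannon entropy He of the grey-level histogram (bits/pixel)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic 8-bit grey-level image standing in for a video frame
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128)).astype(np.float64)

# Two-level 2-D Haar DWT (the 3-D case adds the temporal axis the same way)
coeffs = pywt.wavedec2(img, 'haar', level=2)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest 10% of coefficients (by magnitude) and reconstruct
thresh = np.quantile(np.abs(arr), 0.90)
arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)
rec = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format='wavedec2'), 'haar')

cr = arr.size / np.count_nonzero(arr_c)   # rough compression ratio from retained coefficients
mse = np.mean((img - rec) ** 2)
print(f"He = {image_entropy(img):.2f} bits/pixel, CR ~ {cr:.1f}, reconstruction MSE = {mse:.1f}")
```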
Medical image segmentation has been one of the most actively studied fields over the past few decades: with the development of modern imaging modalities such as magnetic resonance imaging (MRI) and computed tomography (CT), physicians and technicians now have to process a growing number of ever-larger medical images. Efficient and accurate computational segmentation algorithms therefore become necessary to extract the desired information from these large data sets. Moreover, sophisticated segmentation algorithms can help physicians better delineate the anatomical structures present in the input images, improve the accuracy of medical diagnosis, and facilitate treatment planning. Many of the proposed algorithms could perform w
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been implemented. The four techniques are the color histogram features technique, the properties features technique, the gray level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to compute a similarity measure. Similarity-based matching is central to CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance. A comparison between them has been carried out. From the results, it is conclud
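The paper's actual feature vectors are not shown here; the sketch below simply illustrates the three similarity measures named above, computed with SciPy between a hypothetical query feature vector and one database feature vector.

```python
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, mahalanobis

# Hypothetical feature vectors (e.g. histogram or GLCM statistics) for a
# query image and a set of database images; the paper's actual features differ.
rng = np.random.default_rng(2)
db_features = rng.normal(size=(100, 8))   # 100 database images, 8 features each
query = rng.normal(size=8)

# Inverse covariance of the database features, needed for the Mahalanobis distance
VI = np.linalg.inv(np.cov(db_features, rowvar=False))

candidate = db_features[0]
d_euclid = euclidean(query, candidate)
d_manhattan = cityblock(query, candidate)        # Manhattan / city-block distance
d_mahalanobis = mahalanobis(query, candidate, VI)

print(f"Euclidean={d_euclid:.3f}, Manhattan={d_manhattan:.3f}, Mahalanobis={d_mahalanobis:.3f}")
```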
Magnetic resonance imaging (MRI) is one of the most important diagnostic tools. There are many methods to segment tumors of the human brain. One of these is the conventional approach that uses pure image processing techniques; it is not preferred because it requires human interaction to achieve accurate segmentation. Unsupervised methods, by contrast, do not require any human interference and can segment the brain with high precision. In this project, unsupervised classification methods have been used to detect tumors in MRI images. These metho
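The abstract does not name the specific unsupervised methods used; a common baseline for this kind of unsupervised classification is k-means clustering of voxel intensities, sketched below with scikit-learn on a synthetic 2-D slice.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-D MRI slice: background/tissue intensities plus a bright "tumor" region
rng = np.random.default_rng(3)
slice_ = rng.normal(loc=0.2, scale=0.05, size=(64, 64))
slice_[20:35, 20:35] += 0.5   # brighter region standing in for a tumor

# Unsupervised classification: cluster voxel intensities into k classes
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    slice_.reshape(-1, 1)
).reshape(slice_.shape)

# The cluster with the highest mean intensity is taken as the tumor candidate
tumor_cluster = max(range(k), key=lambda c: slice_[labels == c].mean())
tumor_mask = labels == tumor_cluster
print(f"candidate tumor voxels: {tumor_mask.sum()}")
```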
The computer vision branch of artificial intelligence is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential step in most pictorial pattern recognition problems, and a new edge detection technique for detecting boundaries has been introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The focus is on the Block Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique, in order to reduce the volume of pictorial data that one may need to store or transmit,
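As an illustration of the first of these (not the paper's implementation), the sketch below applies classical Block Truncation Coding to 4x4 blocks: each block is reduced to a one-bit-per-pixel bitmap plus two grey levels that preserve the block mean and standard deviation. The DCT variant would instead quantize 8x8 transform coefficients.

```python
import numpy as np

def btc_block(block):
    """Block Truncation Coding of one block: a bit-plane plus two grey levels."""
    n = block.size
    m, sigma = block.mean(), block.std()
    bitmap = block >= m
    q = bitmap.sum()
    if q in (0, n):                       # flat block: nothing to truncate
        return np.full_like(block, m)
    low = m - sigma * np.sqrt(q / (n - q))
    high = m + sigma * np.sqrt((n - q) / q)
    return np.where(bitmap, high, low)

# Encode/decode a synthetic 8-bit image in 4x4 blocks
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
rec = np.zeros_like(img)
B = 4
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        rec[i:i+B, j:j+B] = btc_block(img[i:i+B, j:j+B])

mse = np.mean((img - rec) ** 2)
print(f"BTC reconstruction MSE: {mse:.1f}")
```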
The current research aims to prepare a proposed programme based on sensory integration theory for remediating some developmental learning disabilities among children. The researchers prepared a programme based on sensory integration by reviewing studies related to the research topic; it can be practiced through active teaching strategies (cooperative learning, peer learning, role-playing, and educational stories). The final format consists of 39 training sessions.
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive domain knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework is typically fed a significant amount of labeled data in order to learn representations automatically. Ultimately, more data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for