The origin of this technique lies in the analysis of François Quesnay (1694-1774), the leader of the Physiocratic school, presented in his Tableau Économique. The method was further developed by Karl Marx in his analysis of the relationships between the departments of production and the nature of these relations in his schemes of reproduction. The modern form of this type of economic analysis is credited to the Russian economist Wassily Leontief. This analytical model is commonly used in preparing economic plans in developing countries (1, p. 86). There are several types of input-output models, such as the static model, the dynamic model, regional models, and so on. However, this research is confined to the static open model, which has found wide practical application. The term "static" refers to models that do not explicitly take time into account; such a model does not show capital accumulation over time. The term "open" refers to the existence of a number of variables (mainly the components of final demand) that are determined outside the model (2, pp. 47-48). The aim of this research is to focus on the most important uses of the input-output model in building the national economic plan; we do not intend here to elaborate on how the model is constructed or on its problems and advantages.
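The core computation of the static open input-output model described above can be illustrated with a short numerical sketch: given a technical-coefficients matrix A and an exogenous final-demand vector f, the gross output required from each sector is x = (I - A)^(-1) f. The sector count and the numerical values below are purely illustrative, not taken from the research.

```python
import numpy as np

# Minimal sketch of the static open input-output (Leontief) model.
# A: technical-coefficients matrix (illustrative values, 3 sectors);
# A[i, j] = input from sector i needed per unit of sector j's output.
A = np.array([
    [0.20, 0.10, 0.05],
    [0.15, 0.25, 0.10],
    [0.10, 0.05, 0.30],
])

# f: final-demand vector, fixed outside the model (hence an "open" model).
f = np.array([100.0, 150.0, 80.0])

# Gross output x must satisfy x = A @ x + f,
# so x = (I - A)^-1 @ f  (the Leontief inverse applied to final demand).
leontief_inverse = np.linalg.inv(np.eye(A.shape[0]) - A)
x = leontief_inverse @ f

print("Required gross output by sector:", np.round(x, 2))
```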
This study aimed to carry out a phytotoxicity experiment using kerosene as a model total petroleum hydrocarbon (TPH) pollutant at different concentrations (1% and 6%), aeration rates (0 and 1 L/min), and retention times (7, 14, 21, 28 and 42 days) in a subsurface flow (SSF) wetland system planted with barley. The greatest removal, 95.7%, was recorded at the 1% kerosene level and an aeration rate of 1 L/min after 42 days of exposure, whereas it was 47% in the control test without plants. Furthermore, the removal efficiencies of hydrocarbons from the soil ranged between 34.155% and 95.7% for all TPH (kerosene) concentrations at both aeration rates (0 and 1 L/min). The Barley c
Sentiment analysis is one of the major fields in natural language processing; its main task is to extract sentiments, opinions, attitudes, and emotions from subjective text. Because of its importance in decision making and in people's trust in reviews on web sites, much academic research has addressed sentiment analysis problems. Deep Learning (DL) is a powerful Machine Learning (ML) technique that has emerged with its ability to represent features and discriminate data, leading to state-of-the-art prediction results. In recent years, DL has been widely used in sentiment analysis; however, its application in the Arabic language field remains scarce. Most previous research addresses other l
In this paper, we implement and examine a Simulink model that uses electroencephalography (EEG) to control several actuators based on brain waves. Such a system is in great demand because it can help individuals who are unable to operate control units that require direct physical contact. Initially, ten volunteers with a wide age range (20-66 years) participated in this study, and statistical measurements were first calculated for all eight channels. The number of channels was then reduced by half according to the activation of brain regions within the utilized protocol, which also decreased the processing time. Consequently, four of the participants (three males and one female) were chosen to examine the Simulink model duri
The segmentation of aerial images using different clustering techniques offers valuable insights into interpreting and analyzing such images. By partitioning the images into meaningful regions, clustering techniques help identify and differentiate various objects and areas of interest, facilitating applications such as urban planning, environmental monitoring, and disaster management. This paper aims to segment color aerial images to provide a means of organizing and understanding the visual information contained within the image for various applications and research purposes. It also examines and compares the basic workings of three popular clustering algorithms: K-Medoids, Fuzzy C-Means (FCM), and Gaussia
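As one minimal sketch of the kind of pixel-level clustering segmentation discussed above, the snippet below clusters the RGB values of an image with a Gaussian mixture model via scikit-learn and returns a label map. The input image, the number of segments, and the function name segment_image_gmm are illustrative assumptions; K-Medoids and Fuzzy C-Means would typically come from other packages (e.g. scikit-learn-extra, scikit-fuzzy).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_image_gmm(image_rgb: np.ndarray, n_segments: int = 4) -> np.ndarray:
    """Cluster pixel colours with a Gaussian mixture and return a label map."""
    h, w, c = image_rgb.shape
    pixels = image_rgb.reshape(-1, c).astype(np.float64)

    gmm = GaussianMixture(n_components=n_segments, covariance_type="full",
                          random_state=0)
    labels = gmm.fit_predict(pixels)   # one cluster label per pixel
    return labels.reshape(h, w)        # label map with the image's spatial shape

# Illustrative usage with a random stand-in for an aerial image.
dummy_image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
label_map = segment_image_gmm(dummy_image, n_segments=3)
print(label_map.shape, np.unique(label_map))
```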
Background: Laser urinary stone lithotripsy is an established endourological modality. The Ho:YAG (2100 nm) laser has broadened the indications for ureteroscopic stone management to include larger stone sizes throughout the whole urinary tract.
Purpose: To evaluate the effectiveness and safety of Holmium:YAG (2100 nm) laser lithotripsy with a semirigid ureteroscope for urinary calculi in a prospective cohort of 17 patients.
Patients and Methods: Holmium:YAG (2100 nm) laser lithotripsy was performed with a semirigid ureteroscope in 17 patients from September 2016 to December 2016. Calculi were located in the lower ureter in 9 patients (52.9%), the midure
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique is introduced in this research for detecting boundaries.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The focus is on the Block Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique. In order to reduce the volume of pictorial data which one may need to store or transmit,
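The Block Truncation Coding step mentioned above can be sketched as follows: each block is reduced to its mean, its standard deviation, and a one-bit plane, and is then reconstructed from two output levels. This is the classic BTC scheme in a minimal form, not the specific coder developed in the research; the block size and test data are illustrative.

```python
import numpy as np

def btc_encode_decode_block(block: np.ndarray) -> np.ndarray:
    """Classic Block Truncation Coding of one grayscale block: keep only the
    block mean, standard deviation and a 1-bit plane, then reconstruct the
    block with two output levels."""
    m = block.mean()
    sigma = block.std()
    bitmap = block >= m                      # 1 bit per pixel
    p = block.size
    q = int(bitmap.sum())                    # pixels at or above the mean

    if q == 0 or q == p or sigma == 0:       # flat block: no detail to keep
        return np.full_like(block, m, dtype=np.float64)

    low = m - sigma * np.sqrt(q / (p - q))   # level for pixels below the mean
    high = m + sigma * np.sqrt((p - q) / q)  # level for pixels at/above the mean
    return np.where(bitmap, high, low)

def btc_image(image: np.ndarray, block_size: int = 4) -> np.ndarray:
    """Apply BTC block by block (image sides assumed divisible by block_size)."""
    out = np.empty_like(image, dtype=np.float64)
    for r in range(0, image.shape[0], block_size):
        for c in range(0, image.shape[1], block_size):
            block = image[r:r + block_size, c:c + block_size].astype(np.float64)
            out[r:r + block_size, c:c + block_size] = btc_encode_decode_block(block)
    return out

# Illustrative usage with a random grayscale stand-in for an edge image.
img = np.random.randint(0, 256, size=(16, 16)).astype(np.float64)
reconstructed = btc_image(img, block_size=4)
print(np.abs(img - reconstructed).mean())    # average reconstruction error
```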
Voice Activity Detection (VAD) is considered an important pre-processing step in speech processing systems such as speech enhancement, speech recognition, and gender and age identification. VAD helps reduce the time required to process speech data and improves final system accuracy by focusing the work on the voiced part of the speech. An automatic technique for VAD using a Fuzzy-Neuro technique (FN-AVAD) is presented in this paper. The aim of this work is to alleviate the problem of choosing the best threshold value in traditional VAD methods and to achieve automaticity by combining fuzzy clustering and machine learning techniques. Four features are extracted from each speech segment: short-term energy, zero-crossing rate, auto
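Two of the four features listed above, short-term energy and zero-crossing rate, can be computed per frame as in the sketch below; the frame length, hop size, and the helper name frame_features are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 400, hop: int = 160):
    """Short-term energy and zero-crossing rate per frame
    (two of the four VAD features mentioned above)."""
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(np.sum(frame ** 2) / frame_len)       # short-term energy
        signs = np.sign(frame)
        zcrs.append(np.mean(np.abs(np.diff(signs)) > 0))      # zero-crossing rate
    return np.array(energies), np.array(zcrs)

# Illustrative usage on a synthetic signal: half a second of silence
# followed by half a second of a 200 Hz tone at a 16 kHz sampling rate.
fs = 16000
t = np.arange(fs) / fs
speechlike = np.concatenate([np.zeros(fs // 2),
                             0.5 * np.sin(2 * np.pi * 200 * t[:fs // 2])])
energy, zcr = frame_features(speechlike)
print(energy.round(4)[:5], zcr.round(3)[:5])
```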
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been achieved. The four techniques are the color histogram features technique, the properties features technique, the gray-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to find the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measures are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance. A comparison between them has been implemented. From the results, it is conclud
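The three similarity measures named above can be sketched with NumPy and SciPy: a query feature vector is ranked against a database of feature vectors using Euclidean, Manhattan (cityblock), and Mahalanobis distances, the latter normalized by the inverse covariance of the database features. The feature dimensionality, the random data, and the helper rank_images are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import distance

# Illustrative feature database: one feature vector per stored image
# (hypothetical 8-dimensional features, e.g. histogram/GLCM statistics).
db_features = np.random.rand(50, 8)
query = np.random.rand(8)

# Inverse covariance of the database features, required by the Mahalanobis distance.
cov_inv = np.linalg.inv(np.cov(db_features, rowvar=False))

def rank_images(query_vec, features, metric):
    """Return database indices sorted by ascending distance to the query."""
    dists = np.array([metric(query_vec, f) for f in features])
    return np.argsort(dists)

euclidean_rank = rank_images(query, db_features, distance.euclidean)
manhattan_rank = rank_images(query, db_features, distance.cityblock)
mahalanobis_rank = rank_images(query, db_features,
                               lambda u, v: distance.mahalanobis(u, v, cov_inv))

print(euclidean_rank[:5], manhattan_rank[:5], mahalanobis_rank[:5])
```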