Modeling data acquisition systems (DASs) can support the vehicle industry in the development and design of sophisticated driver assistance systems. Modeling DASs on the basis of multiple criteria is a multicriteria decision-making (MCDM) problem. Although the literature provides models for DASs, the issue of imprecise, unclear, and ambiguous information remains unresolved. Compared with existing MCDM methods, the fuzzy decision by opinion score method II (FDOSM II) and fuzzy weighted with zero inconsistency II (FWZIC II) have demonstrated robustness for modeling DASs. However, these methods are implemented in an intuitionistic fuzzy set environment, which restricts experts' freedom in assigning membership and nonmembership degrees, limits the simulation of real-world ambiguity, offers only a narrow fuzzy number space, and cannot handle interval data. Thus, this study used a more efficient fuzzy environment, the interval-valued linear Diophantine fuzzy set (IVLDF), combining it with FWZIC II for criterion weighting and with FDOSM II for DAS modeling, to address these issues and support the industrial community in the design and implementation of advanced driver assistance systems in vehicles. The proposed methodology comprises two consecutive phases. The first phase involves adapting a decision matrix that intersects DAS alternatives and criteria. The second (development) phase proposes a decision modeling approach based on the formulation of IVLD-FWZIC II and IVLD-FDOSM II to model DASs. A total of 14 DASs were modeled on the basis of 15 DAS criteria, including seven subcriteria for "comprehensive complexity assessment" and eight subcriteria for "design and implementation," which had a remarkable effect on DAS design when implemented by industrial communities. Systematic ranking, sensitivity analysis, and modeling checklists were conducted to demonstrate that the modeling results follow a systematic ranking, as indicated by the high correlations across all described scenarios of changing criterion weight values, supporting the most important research points and providing a value-adding process for modeling the most desirable DAS.
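For intuition, the decision-matrix step that both phases build on can be sketched as a plain weighted-sum MCDM ranking. This is only a generic skeleton, not the IVLD-FWZIC II/IVLD-FDOSM II formulation itself; the matrix values, the benefit/cost flags, and the weights below are all illustrative.

```python
import numpy as np

# Hypothetical decision matrix: rows are alternatives, columns are criteria.
# (The paper's matrix would be 14 DASs x 15 criteria; 4 x 3 here for brevity.)
X = np.array([
    [0.7, 120.0, 3.0],
    [0.9,  90.0, 5.0],
    [0.6, 150.0, 4.0],
    [0.8, 110.0, 2.0],
])
benefit = np.array([True, False, True])  # False marks a cost criterion
w = np.array([0.5, 0.3, 0.2])            # weights (FWZIC II would supply these)

# Min-max normalisation; flip cost criteria so larger is always better.
lo, hi = X.min(axis=0), X.max(axis=0)
N = (X - lo) / (hi - lo)
N[:, ~benefit] = 1.0 - N[:, ~benefit]

scores = N @ w                 # weighted-sum aggregation per alternative
ranking = np.argsort(-scores)  # indices of alternatives, best first
print(scores.round(3), ranking)
```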
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data into a concept frame graph (CFG). As is well known, DBSCAN groups data points of the same kind into clusters and treats points that fall outside cluster behavior as noise or anomalies. DBSCAN can thus detect abnormal points that lie farther than a certain set threshold (extreme values). However, anomalies are not only those cases that are unusual or far from a specific group; there is also a type of data that does not occur repeatedly but is nevertheless considered abnormal with respect to the known groups. The analysis showed that DBSCAN using the ...
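As a baseline for the distance-threshold behavior described above, the stock DBSCAN noise labeling can be sketched as follows (a minimal example using scikit-learn on synthetic data; the eps and min_samples values are illustrative, not the paper's):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Two dense clusters plus a handful of uniformly scattered outliers.
X, _ = make_blobs(n_samples=300, centers=2, cluster_std=0.5, random_state=0)
outliers = np.random.RandomState(1).uniform(-10, 10, size=(10, 2))
X = np.vstack([X, outliers])

# Points that cannot be reached from any dense region are labelled -1.
labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(X)
anomalies = X[labels == -1]
print(f"{len(anomalies)} points flagged as noise/anomalies")
```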
The study aims to build a training program based on Connectivism Theory to develop e-learning competencies for Islamic education teachers in the Governorate of Dhofar, and to identify its effectiveness. The study sample consisted of 30 randomly selected Islamic education teachers who took part in the training program. The study used the descriptive approach to determine the electronic competencies and build the training program, and the quasi-experimental approach to determine the effectiveness of the program. The study tools were a cognitive achievement test and an observation card, which were administered before and after the program. The study found that the effectiveness of the training program ...
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample and are very small compared with the original signals. The compression ratio is calculated from the size of the ...
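The pipeline as described, wavelet decomposition, retention of the approximation coefficients only, and Levinson-Durbin LPC, can be sketched as below. This is a minimal reconstruction under stated assumptions (PyWavelets for the transform, a toy sinusoid standing in for speech, and an assumed predictor order of 10), not the paper's implementation:

```python
import numpy as np
import pywt  # PyWavelets

def levinson_durbin(r, order):
    """Solve the LPC normal equations from autocorrelation r[0..order]."""
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / err
        a[1:i] += k[i - 1] * a[i - 1:0:-1]   # symmetric coefficient update
        a[i] = k[i - 1]
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err  # LP coefficients, reflection coefficients, error

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)  # toy stand-in signal

coeffs = pywt.wavedec(speech, 'db4', level=4)
approx = coeffs[0]        # keep approximations; detail coefficients ignored
frame = approx            # rectangular window is the identity on the frame
p = 10
r = np.correlate(frame, frame, 'full')[len(frame) - 1: len(frame) + p]
lpc, refl, pred_err = levinson_durbin(r, p)
# The "compressed file" would store lpc plus the previous sample.
```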
The theory of probabilistic programming may be conceived in several different ways. As a method of programming, it analyses the implications of probabilistic variations in the parameter space of a linear or nonlinear programming model. Such probabilistic variations in economic models may arise from incomplete information about changes in demand, production, and technology; from specification errors in the econometric relations presumed for different economic agents; from uncertainty of various sorts; and from the consequences of imperfect aggregation or disaggregation of economic variables. In this research, we discuss the probabilistic programming problem when the coefficient b_i is a random variable.
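One textbook way to make this precise, not necessarily the exact formulation used in the research, is chance-constrained programming, where each constraint with a random right-hand side b_i must hold with at least a prescribed probability:

```latex
\max_{x \ge 0} \; c^{\top}x
\quad \text{subject to} \quad
\Pr\!\left(a_i^{\top}x \le b_i\right) \ge \alpha_i, \qquad i = 1,\dots,m.
```

If b_i is normally distributed with mean \mu_i and standard deviation \sigma_i, the i-th chance constraint has the deterministic equivalent a_i^{\top}x \le \mu_i + \sigma_i\,\Phi^{-1}(1-\alpha_i), where \Phi is the standard normal CDF, so the problem reduces to an ordinary program.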
Some experiments require knowing the extent of their usefulness in order to decide whether to continue providing them. This is done through the fuzzy regression discontinuity model, where the Epanechnikov and triangular kernels were used to estimate the model on data generated by a Monte Carlo experiment, and the results obtained were compared. It was found that the Epanechnikov kernel has the smaller mean squared error.
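The kernel-weighting step of a regression discontinuity estimate can be sketched as follows. This is a sharp-RD local-linear skeleton for illustration (cutoff, bandwidth, and data are assumed, not the paper's); a fuzzy RD estimate would divide this outcome jump by the analogous jump in treatment take-up at the cutoff.

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def triangular(u):
    return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)

def local_linear_intercept(x, y, w):
    """Weighted least-squares fit of y ~ 1 + x; returns the intercept."""
    X = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

def rd_effect(x, y, cutoff=0.0, h=1.0, kernel=epanechnikov):
    """Sharp RD: difference of kernel-weighted local-linear intercepts
    fitted separately on each side of the cutoff."""
    w = kernel((x - cutoff) / h)
    left = (x < cutoff) & (w > 0)
    right = (x >= cutoff) & (w > 0)
    return (local_linear_intercept(x[right] - cutoff, y[right], w[right])
            - local_linear_intercept(x[left] - cutoff, y[left], w[left]))
```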
Summary: The objective of the research is to examine an instructional learning design based on Ausubel's theory and its effect on fourth-grade primary pupils' acquisition of geographical concepts and the development of their habits of mind. To achieve this, the researcher relied on two hypotheses and used an equivalent-groups design: the first, experimental group was taught according to the instructional learning design based on the theory, while the other, control group was taught according to the traditional method. The research community consists of fourth-grade pupils in a day primary school for girls in the Directorate of Education Baghdad, Al-Rusafa the Third, in the academic year 20 ...
Shallow foundations are commonly used to transfer load to a soil layer within the permissible limits of settlement, based on the bearing capacity of the soil. For most practical cases, the shape of the shallow foundation is of slight significance. Also, friction resistance forces in the first layers of soil are negligible owing to the insufficient surrounding surface area and compaction conditions. However, the bearing capacity of a shallow foundation can be increased by several techniques. Geocell is one of the geosynthetic tools applied mainly to reinforce soil. This study presents a numerical approach to honeycombed geocell steel panels reinforcing the sandy soil under a shallow foundation, and several parameters are investigated, such as the ...
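For context, the ultimate bearing capacity that such reinforcement aims to improve is classically estimated, for a strip footing, by Terzaghi's expression (a standard result quoted here for background, not taken from the study):

```latex
q_u = c\,N_c + \gamma D_f\,N_q + \tfrac{1}{2}\,\gamma B\,N_\gamma
```

where c is the soil cohesion, \gamma the unit weight, D_f the embedment depth, B the footing width, and N_c, N_q, N_\gamma dimensionless bearing capacity factors depending on the friction angle.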
In this work, the effect of choosing a tri-circular tube section was addressed to minimize the end effector's error; a comparison was made between the tri-tube section and the traditional square cross section for a robot arm. The study shows that, for the same weight of square section and tri-tube section, the error may be reduced by about 33%.
A program was built using MathCAD software to calculate the minimum weight of a square-section robot arm that could withstand a given payload with minimum deflection. The second part of the program performs an optimization of the cross-section dimensions and gives the dimensions of the tri-circular tube cross section that has the same weight as ...
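The mechanics behind the comparison can be illustrated with a short calculation: for a cantilevered arm the tip deflection is delta = F L^3 / (3 E I), so at equal weight (equal cross-sectional area) the section with the larger second moment of area I deflects less. The sketch below compares a square hollow section with three circular tubes arranged in an equilateral triangle; every dimension, material value, and the triangular arrangement itself are assumptions for illustration, not the paper's data:

```python
import numpy as np

E = 70e9    # Young's modulus, Pa (aluminium, assumed)
F = 100.0   # payload force at the end effector, N (assumed)
L = 1.0     # arm length, m (assumed)

# Square hollow section: outer width b, wall thickness t (assumed values).
b, t = 0.05, 0.003
A_sq = b**2 - (b - 2 * t)**2
I_sq = (b**4 - (b - 2 * t)**4) / 12.0

# Three thin-walled circular tubes of the same total area, arranged in an
# equilateral triangle of side s. Radius follows from A_tube ~ 2*pi*r*t.
s = 0.05
A_tube = A_sq / 3
r = A_tube / (2 * np.pi * t)
I_tube = np.pi * r**3 * t            # thin-wall approximation per tube
# Parallel-axis theorem: one centroid at s/sqrt(3) above the bending axis,
# two at s/(2*sqrt(3)) below, so the sum of d_i^2 over the tubes is s^2/2.
I_tri = 3 * I_tube + A_tube * s**2 / 2

for name, I in (("square", I_sq), ("tri-tube", I_tri)):
    delta = F * L**3 / (3 * E * I)
    print(f"{name:9s} I = {I:.3e} m^4, tip deflection = {delta * 1e3:.3f} mm")
```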