Among metaheuristic algorithms, population-based algorithms are explorative search methods that are superior to local search algorithms at exploring the search space for globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which limits the refinement of the search-space neighborhood for better solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA is prone to premature convergence when no neighborhood search strategy is employed to improve the quality of clustering solutions in the neighborhood region and to explore the global regions of the search space.
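For context on the exploration/exploitation trade-off discussed above, the following is a minimal sketch of the standard FA position update, not the neighborhood-search variant the study proposes; the hyperparameter values `beta0`, `gamma`, and `alpha` are illustrative assumptions.

```python
import numpy as np

def firefly_step(pos, fitness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One synchronous update of the standard firefly algorithm (minimization)."""
    rng = rng or np.random.default_rng()
    n, d = pos.shape
    new_pos = pos.copy()
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:  # firefly j is brighter (better)
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                # move toward j (exploitation) plus a random-walk term (exploration)
                new_pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=d)
    return new_pos
```

The `alpha` random-walk term is the only source of neighborhood exploration in the basic algorithm, which is why FA without an added neighborhood search strategy can converge prematurely.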
This research studies fuzzy sets, one of the most modern concepts applied in various practical and theoretical areas and in many fields of life. It addresses the fuzzy random variable, whose values are not real numbers but fuzzy numbers, because it expresses vague or uncertain phenomena whose measurements are not definite. Fuzzy data were presented for a two-sample test and for the analysis-of-variance method for fuzzy random variables; this method depends on a number of assumptions, which is a problem that prevents its use when those assumptions are not satisfied.
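To make the fuzzy numbers mentioned above concrete, here is a minimal sketch of a triangular fuzzy number and its membership function; the class name and the example values are hypothetical, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Triangular fuzzy number (a, m, b): support [a, b], modal value m."""
    a: float  # left endpoint of the support (membership 0)
    m: float  # modal (most plausible) value (membership 1)
    b: float  # right endpoint of the support (membership 0)

    def membership(self, x: float) -> float:
        # piecewise-linear membership rising on [a, m], falling on [m, b]
        if self.a < x <= self.m:
            return (x - self.a) / (self.m - self.a)
        if self.m < x < self.b:
            return (self.b - x) / (self.b - self.m)
        return 1.0 if x == self.m else 0.0

# an imprecise measurement "about 5": membership peaks at 5
about_five = TriangularFuzzyNumber(4.0, 5.0, 6.0)
print(about_five.membership(4.5))  # 0.5
```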
The Hopfield network is one of the simplest neural network types; its architecture connects every neuron to every other neuron, so it is called a fully connected neural network. It also serves as an auto-associative memory, because the network returns a stored pattern immediately upon recognition. This network has many limitations, including memory capacity, discrepancy, orthogonality between patterns, weight symmetry, and local minima. This paper proposes a new strategy for designing the Hopfield network based on the XOR operation; the strategy addresses these limitations through a new algorithm in the Hopfield network design and increases the performance of the Hopfield network by modifying the architecture of the network.
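For reference, the sketch below shows the classical Hopfield design whose limitations (capacity, spurious local minima) the paper targets; it is the standard Hebbian formulation, not the proposed XOR-based variant.

```python
import numpy as np

class Hopfield:
    """Classical fully connected Hopfield network with Hebbian learning."""
    def __init__(self, n):
        self.W = np.zeros((n, n))  # symmetric weight matrix

    def train(self, patterns):
        # Hebbian rule: W += p p^T for each bipolar (+1/-1) pattern
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)  # no self-connections

    def recall(self, state, steps=10):
        # iterate synchronous updates until the probe settles on a stored pattern
        s = state.copy()
        for _ in range(steps):
            s = np.sign(self.W @ s)
            s[s == 0] = 1
        return s

# store one pattern, then recover it from a corrupted probe (auto-association)
net = Hopfield(4)
net.train([np.array([1, -1, 1, -1])])
print(net.recall(np.array([1, 1, 1, -1])))  # -> [ 1 -1  1 -1]
```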
The research aims to present a proposed strategy for the North Oil Company. The proposed strategy took the surrounding environmental conditions into account and relied in its formulation on scientific foundations and steps that are comprehensive and realistic, as it covered the company's main activities (production and exploration, refining and filtration, oil export and transport, research and development, finance, information technology, and human resources). The David model was adopted in the environmental analysis of the factors that were diagnosed.
In this research, an attempt has been made to find robust estimators for the Hotelling T2 test when the data come from a multivariate normal distribution and the multivariate sample contains outliers. The research also provides easily computed, high-breakdown-point, consistent robust estimators of multivariate location and dispersion for multivariate analysis, using two types of robust estimators: the minimum covariance determinant (MCD) estimator and the reweighted minimum covariance determinant estimator.
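As a hedged illustration of this approach, the sketch below plugs MCD location and scatter estimates into the one-sample Hotelling T2 statistic. It uses scikit-learn's MinCovDet (which applies the reweighting step by default), and the hypothesized mean `mu0` and simulated data are assumptions for the example, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
X[:5] += 10  # inject a few outliers into the multivariate sample

mu0 = np.zeros(3)          # hypothesized mean vector (assumed for the example)
mcd = MinCovDet().fit(X)   # robust location/scatter; reweighted MCD by default

# Robust one-sample Hotelling T^2: n * (x_R - mu0)' S_R^{-1} (x_R - mu0),
# with the robust estimates replacing the classical mean and covariance.
n = X.shape[0]
diff = mcd.location_ - mu0
T2 = n * diff @ np.linalg.solve(mcd.covariance_, diff)
print(T2)
```

Because the MCD estimates are barely affected by the injected outliers, the resulting T2 stays close to its value under clean data, unlike the classical statistic.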
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple data sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and learning tasks.
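A minimal sketch of the multi-resolution aggregation idea described above, assuming one-dimensional numeric data bucketed at progressively coarser resolutions; the class, the power-of-two bucket scheme, and the count/sum summaries are illustrative choices, not the paper's actual design.

```python
from collections import defaultdict

class MultiResolutionAggregate:
    """Per-level bucket summaries; coarser levels trade accuracy for efficiency."""
    def __init__(self, base_width=1.0, levels=4):
        self.base_width = base_width
        # per level: bucket index -> [count, sum] summary
        self.levels = [defaultdict(lambda: [0, 0.0]) for _ in range(levels)]

    def insert(self, x):
        # incremental update: each new instance touches one bucket per level
        for lvl, buckets in enumerate(self.levels):
            width = self.base_width * (2 ** lvl)  # each level halves the resolution
            b = buckets[int(x // width)]
            b[0] += 1
            b[1] += x

    def mean(self, level):
        # querying a coarser level scans fewer buckets (faster, less precise)
        buckets = self.levels[level]
        n = sum(c for c, _ in buckets.values())
        return sum(s for _, s in buckets.values()) / n if n else 0.0

agg = MultiResolutionAggregate()
for v in [0.2, 0.7, 3.4, 5.9]:
    agg.insert(v)
print(len(agg.levels[0]), len(agg.levels[3]))  # 3 fine buckets vs 1 coarse bucket
```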
Big data of different types, such as text and images, is rapidly generated by the internet and other applications. Dealing with this data using traditional methods is not practical, since it varies in size, type, and processing-speed requirements. Data analytics has therefore become an important tool for big data applications, because only meaningful information is analyzed and extracted. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. Furthermore, it discusses how the revolution in data analytics based on artificial intelligence algorithms might provide further improvements to this process.
Regression testing is expensive and therefore requires optimization. Typically, the optimization of test cases results in selecting a reduced subset of test cases or prioritizing the test cases to detect potential faults at an earlier phase. Many former studies revealed heuristic-dependent mechanisms to attain optimality while reducing or prioritizing test cases. Nevertheless, those studies lacked systematic procedures to manage the issue of tied test cases. Moreover, evolutionary algorithms such as genetic algorithms often help in reducing test cases, together with a concurrent decrease in computational runtime. However, when the fault detection capacity along with other parameters must be examined, the method falls short.
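To make the tied-test-cases issue concrete, the sketch below shows a greedy additional-coverage prioritization in which several test cases tie on coverage gain; the tie-break used here (lower execution cost, then name) is an assumed illustrative policy, not the mechanism proposed in the study.

```python
def prioritize(coverage, cost):
    """Greedy 'additional' prioritization: repeatedly pick the test covering
    the most not-yet-covered statements; break ties by cost, then by name."""
    remaining = set(coverage)
    covered, order = set(), []
    while remaining:
        best = max(
            remaining,
            # (coverage gain, cheaper first, deterministic name tie-break)
            key=lambda t: (len(coverage[t] - covered), -cost[t], t),
        )
        order.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return order

coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4, 5},   # t1, t2, t3 all tie on the initial gain of 3...
    "t3": {4, 5, 6},
}
cost = {"t1": 2.0, "t2": 1.0, "t3": 1.5}
print(prioritize(coverage, cost))  # ['t2', 't1', 't3'] -- t2 wins the tie on cost
```

Without an explicit tie-break rule, the order among tied test cases is arbitrary, which is precisely the gap in systematic procedures that the abstract points out.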