Several methods are used to solve the traditional transportation problem, in which the supply quantities, demand quantities, and transportation costs are known exactly. These methods obtain a basic feasible solution and then improve it through a series of consecutive calculations until the optimal solution is reached.
The steps become more complex when the variables are fuzzy, so this paper presents the disadvantages of the traditional solution methods when the variables are given in fuzzy form.
The paper also compares the results obtained after applying different ranking formulas to convert the same fully fuzzy numerical example from fuzzy form to crisp form. The problem is then converted into a linear programming model, and the Big-M method is used to find the optimal solution, which represents the number of units transferred from processing or supply centers to a number of demand centers based on the known transportation costs.
The goal of the problem is to find the lowest total transportation cost, and the comparison is based on that value. The results are presented in a comprehensive table that organizes the data and results in a way that facilitates quick and accurate comparison. A modification to one of the ranking formulas is suggested, because it gives results that differ from those of the other formulas.
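As an illustration of the overall procedure (not the paper's specific ranking formulas or its Big-M implementation), the following sketch converts made-up triangular fuzzy costs to crisp values with a centroid ranking formula and then solves the resulting crisp transportation problem as a linear program; all data, the centroid choice, and the use of scipy's linprog instead of Big-M are assumptions for illustration.

```python
# Illustrative sketch (not the paper's exact procedure): rank triangular fuzzy
# numbers with a centroid formula, then solve the resulting crisp transportation
# problem as a linear program. The data below are made-up placeholders.
import numpy as np
from scipy.optimize import linprog

def centroid_rank(a, b, c):
    """Crisp value of a triangular fuzzy number (a, b, c) via the centroid formula."""
    return (a + b + c) / 3.0

# Hypothetical fuzzy costs (triangular), supplies and demands for 2 sources x 3 destinations.
fuzzy_cost = [[(2, 4, 6), (5, 7, 9), (1, 3, 5)],
              [(3, 5, 7), (2, 4, 6), (6, 8, 10)]]
supply = [30, 40]
demand = [20, 25, 25]

cost = np.array([[centroid_rank(*c) for c in row] for row in fuzzy_cost]).ravel()

m, n = 2, 3
A_eq, b_eq = [], []
for i in range(m):                       # each supply row must ship exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                       # each demand column must receive its demand
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x.reshape(m, n), res.fun)      # shipment plan and minimum total cost
```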
Voice Activity Detection (VAD) is considered an important pre-processing step in speech processing systems such as speech enhancement, speech recognition, and gender and age identification. VAD helps reduce the time required to process speech data and improves the final system accuracy by focusing the work on the voiced part of the speech. An automatic VAD technique based on a fuzzy-neuro approach (FN-AVAD) is presented in this paper. The aim of this work is to alleviate the problem of choosing the best threshold value in traditional VAD methods and to achieve automaticity by combining fuzzy clustering and machine learning techniques. Four features are extracted from each speech segment: short-term energy, zero-crossing rate, auto…
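A minimal sketch of two of the frame-level features named above, short-term energy and zero-crossing rate; the frame length, hop size, and synthetic signal are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of two frame-level VAD features: short-term energy and
# zero-crossing rate. Frame length/hop are illustrative choices.
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Return per-frame short-term energy and zero-crossing rate."""
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(np.sum(frame ** 2))                        # short-term energy
        zcrs.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))  # zero-crossing rate
    return np.array(energies), np.array(zcrs)

# Example on a synthetic signal: a noisy segment followed by a voiced-like tone.
fs = 16000
t = np.arange(fs) / fs
sig = np.concatenate([0.01 * np.random.randn(fs // 2),
                      np.sin(2 * np.pi * 220 * t[:fs // 2])])
energy, zcr = frame_features(sig)
print(energy.shape, zcr.shape)
```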
One application of quantitative methods that has received explicit attention over the previous period (the last two centuries) is the traveling salesman method. This interest stems from the actual need of many production sectors and companies that distribute their products, whether locally made or imported, to customers or to other industrial sectors. Most of these sectors and companies always aspire to increase profits, imports, production quantities, export quantities, and so on, while, on the other hand, they want to follow distribution routes that are the best, shortest, or most appropriate.
Coronavirus disease (COVID-19), which is caused by SARS-CoV-2, was declared a global pandemic by the World Health Organization (WHO) and has led to the collapse of healthcare systems in several countries around the globe. Machine learning (ML) methods are among the most widely used artificial intelligence (AI) approaches for classifying COVID-19 images. However, many machine learning methods are used to classify COVID-19, and the question is: which machine learning method is best under multi-criteria evaluation? Therefore, this research presents a benchmarking of COVID-19 machine learning methods, which is recognized as a multi-criteria decision-making (MCDM) problem. In the recent century, the trend of developing …
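The abstract does not state which MCDM technique is applied; as a generic illustration only, the sketch below ranks hypothetical ML methods over a few criteria with TOPSIS, a common MCDM method. All scores, weights, and criteria are made-up placeholders.

```python
# Illustrative MCDM sketch (TOPSIS shown only as a common example; not
# necessarily the authors' technique). Scores and weights are placeholders
# for ranking ML methods over several criteria.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS."""
    m = matrix / np.linalg.norm(matrix, axis=0)          # vector-normalize each criterion
    v = m * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # closeness coefficient (higher is better)

# Hypothetical decision matrix: 3 ML methods x 3 criteria (accuracy, F1, training time).
scores  = np.array([[0.95, 0.94, 120.0],
                    [0.92, 0.93,  40.0],
                    [0.90, 0.89,  15.0]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([True, True, False])                  # training time is a cost criterion
print(topsis(scores, weights, benefit))
```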
Urbanization has led to significant changes in the properties of the land surface. This adds extra heat loads to the city, which threaten people's comfort and health. The relationship between climate indicators and the features of early virtual urban design remains unclear. The research focused on simulation capability and its effect on the urban microclimate. It is assumed that adopting certain scenarios and strategies to mitigate the intensity of the urban heat island (UHI) improves the local climate and reduces the impact of global warming. The aim is to review UHI simulation methods and the programs that support simulating and mitigating the UHI effect. The UHI review has been conducted for …
In this paper, the effective computational method (ECM) based on standard monomial polynomials has been implemented to solve the nonlinear Jeffery-Hamel flow problem. Moreover, novel effective computational methods have been developed and suggested in this study using suitable basis functions, namely Chebyshev, Bernstein, Legendre, and Hermite polynomials. Using these basis functions converts the nonlinear problem into a nonlinear algebraic system of equations, which is then solved using the Mathematica®12 program. The developed effective computational methods (D-ECM) have been applied to solve the nonlinear Jeffery-Hamel flow problem, and a comparison between the methods is shown. Furthermore, the maximum …
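To illustrate the general idea of converting a nonlinear ODE into a nonlinear algebraic system with a polynomial basis (this is not the paper's exact ECM/D-ECM formulation), the sketch below collocates a Chebyshev expansion of the commonly used normalized Jeffery-Hamel equation f''' + 2αRe f f' + 4α²f' = 0 with f(0)=1, f'(0)=0, f(1)=0; the equation form, parameter values, and solver choice are assumptions made for illustration.

```python
# Sketch of the general idea (not the paper's exact ECM): expand the unknown in a
# Chebyshev basis and collocate, turning the nonlinear ODE into a nonlinear
# algebraic system. The Jeffery-Hamel form f''' + 2*a*Re*f*f' + 4*a**2*f' = 0 with
# f(0)=1, f'(0)=0, f(1)=0 is the commonly used normalized version (assumed here).
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import fsolve

alpha, Re, N = np.deg2rad(5), 50.0, 8          # wedge half-angle, Reynolds number, basis size

def residuals(coef):
    d1, d3 = C.chebder(coef, 1), C.chebder(coef, 3)
    eta = np.linspace(0, 1, N - 2)             # collocation points for the ODE residual
    ode = (C.chebval(eta, d3)
           + 2 * alpha * Re * C.chebval(eta, coef) * C.chebval(eta, d1)
           + 4 * alpha**2 * C.chebval(eta, d1))
    bc = [C.chebval(0, coef) - 1, C.chebval(0, d1), C.chebval(1, coef)]
    return np.concatenate([ode, bc])            # (N-2) + 3 equations for N+1 coefficients

guess = np.zeros(N + 1); guess[0], guess[2] = 0.5, -0.5   # start from f ≈ 1 - eta**2
coef = fsolve(residuals, guess)
print(C.chebval([0.0, 0.5, 1.0], coef))                   # velocity profile samples
```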
Rumors are typically described as remarks whose truth value is unknown. A rumor on social media has the potential to spread erroneous information to a large group of individuals. Such false facts will influence decision-making in a variety of societies. In online social media, where enormous amounts of information are easily distributed over a large network of sources with unverified authority, detecting rumors is critical. This research proposes that rumor detection be done using Natural Language Processing (NLP) tools as well as six distinct machine learning (ML) methods: Naive Bayes (NB), Random Forest (RF), K-Nearest Neighbor (KNN), Logistic Regression (LR), Stochastic Gradient Descent (SGD), and Decision Tree (DT) …
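A minimal sketch of the kind of NLP-plus-ML pipeline described, using TF-IDF features with two of the listed classifiers; the tiny dataset, labels, and feature choice are made-up placeholders rather than the paper's corpus or preprocessing.

```python
# Minimal sketch of an NLP + ML rumor-detection pipeline (not the paper's exact
# features or data): TF-IDF text features feeding two of the listed classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts  = ["official statement confirms the event",
          "breaking!!! miracle cure shared by anonymous source",
          "city council publishes verified report",
          "forward this now, they are hiding the truth"]
labels = [0, 1, 0, 1]                                # 0 = non-rumor, 1 = rumor (toy labels)

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model = make_pipeline(TfidfVectorizer(), clf)    # TF-IDF features -> classifier
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["anonymous source claims hidden cure"]))
```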
This paper studies a novel technique based on two effective methods, the modified Laplace variational iteration method (MLVIM) and a new variational iteration method (MVIM), to solve PDEs with variable coefficients. The current modification of the MLVIM is based on coupling the variational iteration method (VIM) with the Laplace transform (LT). In our proposal there is no need to calculate the Lagrange multiplier. The Laplace transform is applied to the problem, and the nonlinear terms are handled using the homotopy perturbation method (HPM). Some examples are given to compare the results of the two methods and to verify the reliability of the present methods.
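For reference (the abstract does not spell out the notation, so the form below is an assumption about the standard setting), the classical VIM correction functional for an equation of the form Lu + Nu = g(t), whose Lagrange multiplier λ the Laplace-based modification avoids having to derive, is commonly written as

```latex
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,\bigl[ L u_n(s) + N\tilde{u}_n(s) - g(s) \bigr]\, ds ,
```

where \tilde{u}_n denotes the restricted variation on which the nonlinear term acts.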
This research attempts to address and quantify the problem of the high frequency of dust storms by identifying the areas in which they are generated so that they can be treated through agricultural areas. Many image processing techniques in remote sensing and geographic information systems were adopted and linked together to identify those areas in Iraq or in neighboring countries, especially to the north and north-west, since the wind over Iraq is northerly and north-westerly for most days of the year. The research included the use of images from the MODIS sensor aboard the Aqua and Terra satellites, together with the assembled dust amounts of these storms, and determining the values o…
In this paper, Volterra Runge-Kutta methods, which include methods of order two and four, are applied to general nonlinear Volterra integral equations of the second kind. Moreover, we study the convergence of the Volterra Runge-Kutta algorithms. Finally, programs for each method are written in the MATLAB language, and a comparison between the two types has been made based on the least-squares errors.
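As a simple illustration of what solving a nonlinear second-kind Volterra integral equation numerically involves (the paper itself uses Volterra Runge-Kutta methods of orders two and four implemented in MATLAB, not the trapezoidal scheme in Python shown here), the sketch below solves an assumed test problem with a known exact solution.

```python
# Illustrative trapezoidal scheme for u(t) = g(t) + int_0^t K(t, s, u(s)) ds
# (a stand-in for the paper's Volterra Runge-Kutta methods).
import numpy as np

def volterra2_trapezoid(g, K, t_end, n):
    """Solve a second-kind Volterra equation with the trapezoidal rule, using a
    few fixed-point iterations per step to resolve the implicit value."""
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    u = np.empty(n + 1)
    u[0] = g(t[0])
    for i in range(1, n + 1):
        w = np.full(i, h); w[0] = h / 2                  # quadrature weights for nodes 0..i-1
        known = g(t[i]) + np.sum(w * K(t[i], t[:i], u[:i]))
        ui = u[i - 1]                                    # predictor
        for _ in range(5):                               # corrector: fixed-point iterations
            ui = known + (h / 2) * K(t[i], t[i], ui)
        u[i] = ui
    return t, u

# Assumed test problem: u(t) = 1 + int_0^t u(s)^2 ds, with exact solution u(t) = 1/(1-t).
t, u = volterra2_trapezoid(lambda t: 1.0, lambda t, s, u: u**2, 0.5, 50)
print(np.max(np.abs(u - 1.0 / (1.0 - t))))               # maximum absolute error
```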
In this work, an optical technique (the laser speckle technique) for measuring surface roughness was applied using the statistical properties of the speckle pattern from the point of view of computer image texture analysis. Four calibration relationships were used to cover a wide range of measurement with the same laser speckle technique. The first is based on the intensity contrast of the speckle, the second on the analysis of the speckle binary image, the third on the size of the speckle pattern spot, and the last on the characterization of the energy feature of the gray-level co-occurrence matrices of the speckle pattern. With these calibration relationships, the surface roughness of an object surface can be evaluated within the …
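A short sketch of two of the speckle statistics mentioned above, the intensity contrast and the GLCM energy feature; the random gamma-distributed image stands in for a recorded speckle pattern, and scikit-image's graycomatrix/graycoprops (version 0.19 or later) are used for the co-occurrence computation.

```python
# Sketch of two speckle statistics: intensity contrast (std/mean) and the GLCM
# energy feature. The random image below stands in for a recorded speckle pattern.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

rng = np.random.default_rng(0)
speckle = rng.gamma(shape=1.0, scale=50.0, size=(256, 256))   # stand-in speckle intensity image

contrast = speckle.std() / speckle.mean()                     # speckle (intensity) contrast

img8 = np.clip(speckle, 0, 255).astype(np.uint8)              # quantize for the GLCM
glcm = graycomatrix(img8, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
energy = graycoprops(glcm, "energy")[0, 0]                    # GLCM energy feature

print(f"speckle contrast = {contrast:.3f}, GLCM energy = {energy:.4f}")
```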