Twitter data analysis is an emerging field of research that uses data collected from Twitter to address many issues, such as disaster response, sentiment analysis, and demographic studies. The success of data analysis relies on collecting data that accurately represent the studied group or phenomenon. Many Twitter analysis applications rely on the locations of the users sending the tweets, but this information is not always available. Several attempts have been made to estimate location-based aspects of a tweet; however, few have investigated data collection methods that focus on location. In this paper, we investigate the two methods for obtaining location-based data provided by the Twitter API: Twitter Places and the geocode parameter. We studied these methods to determine their accuracy and their suitability for research. The study concludes that the Places method is the more accurate, but it excludes much of the data, while the geocode method provides more data but requires special attention to outliers. Copyright © Research Institute for Intelligent Computer Systems, 2018. All rights reserved.
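The outlier handling that the geocode method requires can be sketched with a simple distance check: tweets whose coordinates fall outside the queried radius are dropped. This is a minimal illustration, not the paper's procedure; the tweet dictionary layout and threshold are assumed.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def filter_outliers(tweets, center, radius_km):
    """Keep only tweets whose coordinates fall inside the query radius."""
    return [t for t in tweets
            if haversine_km(t["lat"], t["lon"], center[0], center[1]) <= radius_km]
```

For example, with a 25 km query centred on London, a tweet geotagged in Paris would be flagged as an outlier and removed.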
It has become necessary to move from traditional to automated systems in production processes because of their substantial advantages, the most important of which is improving and increasing production. However, there is still a need to improve and develop the operation of these systems.
The objective of this work is to study time reduction by combining multiple sequences of operations into one process. To carry out this work, a pneumatic system is designed to decrease/increase the time of the sequence that performs a pick-and-place process by optimizing the sequences based on the obstacle dimensions. Three axes are represented using pneumatic cylinders that move according to the sequence used. The system is implemented and con…
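The time saving from combining sequences can be illustrated with a small sketch: motions that do not conflict (for example, clearing an obstacle on one axis while another axis travels) run together, so a group finishes when its slowest motion finishes. The motion names, timings, and groupings below are hypothetical, not from the paper.

```python
def sequential_time(steps):
    """Total cycle time when every motion runs one after another."""
    return sum(steps.values())

def combined_time(steps, groups):
    """Total cycle time when each group of non-conflicting motions overlaps;
    a group finishes when its slowest motion finishes."""
    return sum(max(steps[name] for name in group) for group in groups)

# Hypothetical motion times (seconds) for a three-axis pneumatic cycle.
steps = {"x_advance": 1.2, "z_down": 0.8, "grip": 0.3,
         "z_up": 0.8, "x_return": 1.2, "release": 0.3}

sequential = sequential_time(steps)  # all six motions back to back: 4.6 s
combined = combined_time(steps, [("x_advance", "z_down"), ("grip",),
                                 ("z_up", "x_return"), ("release",)])  # 3.0 s
```

Overlapping the two travel pairs cuts the illustrative cycle from 4.6 s to 3.0 s, which is the kind of reduction the sequence optimization targets.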
Orthogonal polynomials and their moments serve as pivotal elements across various fields. Discrete Krawtchouk polynomials (DKraPs) are a versatile family of orthogonal polynomials widely used in fields such as probability theory, signal processing, digital communications, and image processing. Various recurrence algorithms have been proposed to address the numerical instability that arises for large orders and signal sizes. DKraP coefficients have typically been computed using sequential algorithms, which are computationally expensive for large order values and polynomial sizes. To this end, this paper introduces a computationally efficient solution that utilizes the parall…
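The recurrence at the heart of such algorithms can be sketched as follows. This is the standard three-term recurrence for the Krawtchouk polynomial K_n(x; p, N), shown as a minimal Python illustration; it is not the paper's stabilized parallel algorithm, and in this naive form it exhibits exactly the instability for large n and N that motivates the work.

```python
def krawtchouk(n, x, p, N):
    """Evaluate K_n(x; p, N) by the standard three-term recurrence
        -x*K_n = p*(N-n)*K_{n+1} - [p*(N-n) + n*(1-p)]*K_n + n*(1-p)*K_{n-1}
    with initial values K_0 = 1 and K_1 = 1 - x/(p*N)."""
    k_prev, k_cur = 1.0, 1.0 - x / (p * N)
    if n == 0:
        return k_prev
    for m in range(1, n):
        # Rearranged recurrence: solve for K_{m+1} from K_m and K_{m-1}.
        k_next = ((p * (N - m) + m * (1 - p) - x) * k_cur
                  - m * (1 - p) * k_prev) / (p * (N - m))
        k_prev, k_cur = k_cur, k_next
    return k_cur
```

For instance, with p = 0.5 and N = 4 one gets K_1(1) = 0.5 and K_2(1) = 0 by direct substitution.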
Background: Bilastine (BLA) is a second-generation H1 antihistamine used to treat allergic rhinoconjunctivitis. Because of its limited solubility, it falls under class II of the Biopharmaceutics Classification System (BCS). The solid dispersion (SD) approach significantly improves the solubility and dissolution rate of poorly soluble medicines. Objective: To improve BLA solubility and dissolution rate by formulating a solid dispersion in the form of effervescent granules. Methods: To create BLA SDs, polyvinylpyrrolidone (PVP K30) and poloxamer 188 (PLX188) were mixed in various ratios (1:5, 1:10, and 1:15) using the kneading technique. All formulations were evaluated based on percent yield, drug content, and saturation solubility. The fo…
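Two of the evaluation metrics named above have standard definitions that can be written out directly; the batch figures below are hypothetical, not the paper's results.

```python
def percent_yield(obtained_mass_mg, drug_mass_mg, carrier_mass_mg):
    """Percent yield = recovered solid dispersion mass / total input mass * 100."""
    return 100.0 * obtained_mass_mg / (drug_mass_mg + carrier_mass_mg)

def drug_content(measured_drug_mg, theoretical_drug_mg):
    """Drug content (%) = assayed drug / theoretical drug in the sample * 100."""
    return 100.0 * measured_drug_mg / theoretical_drug_mg

# Hypothetical 1:10 batch: 100 mg BLA + 1000 mg carrier, 1045 mg recovered.
y = percent_yield(1045.0, 100.0, 1000.0)  # 95.0 %
```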
The problem of frequency estimation of a single sinusoid observed in colored noise is addressed. Our estimator is based on the operation of the sinusoidal digital phase-locked loop (SDPLL), which carries the frequency information in its phase error after the noisy sinusoid has been acquired by the SDPLL. We show by computer simulations that this frequency estimator beats the Cramer-Rao bound (CRB) on the frequency error variance for moderate and high SNRs when the colored noise has a general low-pass filtered (LPF) characteristic, thereby outperforming, in terms of frequency error variance, several existing techniques, some of which are, in addition, computationally demanding. Moreover, the present approach generalizes existing work tha…
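The benchmark against which such estimators are typically compared is the CRB for a single sinusoid in *white* Gaussian noise. A small sketch of that textbook bound (the large-N approximation; this is the standard reference formula, not the paper's colored-noise analysis) is:

```python
def crb_freq_white(snr_db, n_samples):
    """Approximate Cramer-Rao lower bound on the variance of the frequency
    estimate (rad^2/sample^2) for a single real sinusoid in white Gaussian
    noise (large-N form):
        var(w) >= 12 / (SNR * N * (N^2 - 1)),  SNR = A^2 / (2*sigma^2)."""
    snr = 10.0 ** (snr_db / 10.0)
    return 12.0 / (snr * n_samples * (n_samples ** 2 - 1))
```

The bound falls roughly as 1/N^3, so doubling the observation length lowers the attainable frequency error variance by about a factor of eight.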
Chemotherapy is one of the most efficient methods for treating cancer patients; it aims to eliminate cancer cells as thoroughly as possible. Delivering medications to patients' bodies through various routes, either oral or intravenous, is part of the chemotherapy process. Different cell-kill hypotheses account for the interactions between the growth of the tumor volume, external drugs, and the rate of their eradication. For the control of drug usage and tumor volume, a model-based smooth super-twisting control (MBSSTC) is proposed in this paper. Firstly, three nonlinear cell-kill mathematical models are considered in this work, including the log-kill, Norton-Simon, and related hypotheses, subject to parametric uncertainties and exo…
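One common form of the log-kill hypothesis combines Gompertzian tumor growth with a drug kill term proportional to tumor burden. The sketch below integrates that form with a simple Euler step; the model form and all parameter values are illustrative assumptions, not the paper's models or controller.

```python
import math

def simulate_log_kill(n0, r, theta, delta, u, dt=0.01, steps=1000):
    """Euler integration of a common log-kill model:
        dN/dt = r*N*ln(theta/N) - delta*u*N
    N: tumor cell population, r: growth rate, theta: carrying capacity,
    delta: drug effectiveness, u: (constant) drug concentration."""
    n = n0
    for _ in range(steps):
        n += dt * (r * n * math.log(theta / n) - delta * u * n)
    return n

untreated = simulate_log_kill(100.0, 0.1, 1000.0, 0.5, u=0.0)  # grows toward theta
treated = simulate_log_kill(100.0, 0.1, 1000.0, 0.5, u=1.0)    # shrinks under drug
```

With the drug off, the population grows toward the carrying capacity; with a constant dose on, the kill term dominates and the tumor burden falls, which is the behaviour a controller on u would exploit.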
Face recognition is required in various applications, and major progress has been witnessed in this area. Many face recognition algorithms have been proposed thus far; however, achieving high recognition accuracy and low execution time remains a challenge. In this work, a new scheme for face recognition is presented that uses hybrid orthogonal polynomials to extract features. The embedded image kernel technique is used to decrease the complexity of feature extraction, and a support vector machine is then adopted to classify these features. Moreover, a fast overlapping-block processing algorithm for feature extraction is used to reduce the computation time. Extensive evaluation of the proposed method was carried out on two different face ima…
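The overlapping-block idea can be sketched in a few lines: the image is sliced into fixed-size blocks whose origins advance by a step smaller than the block side, so neighbouring blocks share pixels. This is a generic illustration of overlapping block extraction, not the paper's fast algorithm; block and step sizes are assumed.

```python
import numpy as np

def overlapping_blocks(image, block, step):
    """Slice a 2-D image into square blocks of side `block`, moving `step`
    pixels between block origins (step < block gives overlapping blocks)."""
    h, w = image.shape
    return [image[r:r + block, c:c + block]
            for r in range(0, h - block + 1, step)
            for c in range(0, w - block + 1, step)]

img = np.arange(64, dtype=float).reshape(8, 8)
blocks = overlapping_blocks(img, block=4, step=2)  # 3 x 3 = 9 overlapping blocks
```

Feature extraction (e.g. polynomial moments) then runs per block, and the overlap trades extra blocks for robustness to local misalignment.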
The grey system model GM(1,1) is a time-series prediction model and the basis of grey theory. This research presents methods for estimating the parameters of the grey model GM(1,1): the accumulative method (ACC), the exponential method (EXP), the modified exponential method (Mod EXP), and the Particle Swarm Optimization method (PSO). These methods were compared based on the mean square error (MSE) and the mean absolute percentage error (MAPE), and simulation was adopted to select the best of the four methods. The best method was then applied to real data representing the consumption rate of two types of oils a he…
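For reference, the classical least-squares construction of GM(1,1) (the baseline the paper's ACC, EXP, Mod EXP, and PSO estimators vary on) can be sketched as follows; this is the textbook formulation, not any of the paper's four methods.

```python
import math

def gm11_fit(x0):
    """Least-squares estimate of GM(1,1) parameters (a, b) for the grey
    differential equation x0(k) + a*z1(k) = b, where x1 is the accumulated
    (AGO) series and z1(k) = 0.5*(x1(k) + x1(k-1))."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulative generation
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Solve the 2x2 normal equations of B @ [a, b]^T = y with B = [-z, 1].
    szz = sum(zi * zi for zi in z)
    sz, sy = sum(z), sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, k):
    """Predicted x0(k+1) (k >= 1) from the fitted whitening equation
    x1_hat(k+1) = (x0(1) - b/a)*exp(-a*k) + b/a, differenced back to x0."""
    a, b = gm11_fit(x0)
    c = x0[0] - b / a
    return (1 - math.exp(a)) * c * math.exp(-a * k)
```

On a geometric series the grey equation is satisfied exactly, so the fit recovers the decay almost perfectly; the small residual error comes from the discrete-to-continuous approximation in the whitening equation.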
Cefixime is an antibiotic useful for treating a variety of microbial infections. In the present work, two rapid, specific, inexpensive, and nontoxic methods were proposed for cefixime determination. Area-under-curve spectrophotometric and HPLC methods were applied for the microquantification of cefixime in highly pure form and in local market formulations. The area under the curve (first technique) was used in the calculation of the cefixime peak using a UV-visible spectrophotometer. The HPLC (second technique) depended on the separation of cefixime by a C18 column, 250 mm (length) × 4.6 mm (diameter), using methanol 50% (organic modifier) and deionized water 50% as the mobile phase. Isocratic flow at a rate of 1 mL/min was applied, the temper…
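The area-under-curve quantity used in the first technique is just the integral of the absorbance spectrum over a wavelength window, commonly computed with the trapezoidal rule. The wavelengths and readings below are hypothetical, for illustration only.

```python
def auc_trapezoid(wavelengths_nm, absorbances):
    """Area under an absorbance spectrum between the first and last recorded
    wavelengths, by the trapezoidal rule (units: absorbance * nm)."""
    return sum(0.5 * (absorbances[i] + absorbances[i + 1])
               * (wavelengths_nm[i + 1] - wavelengths_nm[i])
               for i in range(len(wavelengths_nm) - 1))

# Hypothetical spectrum: readings every 5 nm between 280 and 300 nm.
area = auc_trapezoid([280, 285, 290, 295, 300], [0.12, 0.31, 0.45, 0.28, 0.10])
```

The resulting area, rather than a single peak absorbance, is regressed against concentration, which averages out pointwise noise in the spectrum.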
After baking the flour, azodicarbonamide, an approved food additive, can be converted into carcinogenic semicarbazide hydrochloride (SEM) and biurea in flour products. Thus, determining SEM in commercial bread products has become mandatory and needs to be performed. Therefore, two accurate, precise, simple, and economical colorimetric methods have been developed for the visual detection and quantitative determination of SEM in commercial flour products. The first method is based on the formation of a blue-coloured product with λmax at 690 nm as a result of a reaction between SEM and potassium ferrocyanide in an acidic medium (pH 6.0). In the second method, a brownish-green coloured product is formed due to the reaction between SEM and phosph…
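Quantitative determination with such colorimetric methods rests on a Beer-Lambert calibration line: absorbance at λmax is regressed against standard concentrations, and unknown samples are read off the inverted line. The sketch below shows that generic workflow with hypothetical standards, not the paper's calibration data.

```python
def fit_calibration(concentrations, absorbances):
    """Ordinary least-squares line A = m*C + c (Beer-Lambert calibration)."""
    n = len(concentrations)
    sx, sy = sum(concentrations), sum(absorbances)
    sxx = sum(x * x for x in concentrations)
    sxy = sum(x * y for x, y in zip(concentrations, absorbances))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope (per unit concentration)
    c = (sy - m * sx) / n                          # intercept (blank absorbance)
    return m, c

def read_concentration(absorbance, m, c):
    """Invert the calibration line to estimate a sample's concentration."""
    return (absorbance - c) / m

# Hypothetical standards (ug/mL) and their absorbances at 690 nm.
m, c = fit_calibration([2, 4, 6, 8, 10], [0.11, 0.21, 0.31, 0.41, 0.51])
```

A sample absorbance of 0.26 would then read back as 5 ug/mL on this illustrative line.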