Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, SVM is widely used because it selects an optimal hyperplane that separates the two classes. SVM achieves very good accuracy and is extremely robust compared with other classification methods such as logistic regression, random forest, k-nearest neighbor and naïve Bayes. However, working with large datasets can cause problems such as time-consuming training and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent procedure. The modified method, stochastic gradient descent SVM (SGD-SVM), is checked using two simulated datasets. Since the classification of different cancer types is important for cancer diagnosis and drug discovery, SGD-SVM is then applied to classify the most common leukemia cancer dataset. The results obtained with SGD-SVM are more accurate than those of many studies that used the same leukemia datasets.
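The abstract does not give the paper's exact training procedure; a minimal sketch of the generic idea, a linear SVM fitted by stochastic gradient descent on the regularized hinge loss, is shown below. All hyperparameter values and the toy data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: linear SVM trained by plain stochastic gradient descent
# on the regularized hinge loss. Hyperparameters (lam, lr, epochs) are assumed.
import numpy as np

def sgd_svm(X, y, lam=0.01, lr=0.1, epochs=50, seed=0):
    """Train a linear SVM on labels y in {-1, +1} with SGD."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            # Subgradient of lam/2 * ||w||^2 + max(0, 1 - margin)
            if margin < 1:
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

# Toy two-class example
X = np.array([[2.0, 3.0], [1.0, 1.5], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = sgd_svm(X, y)
print(np.sign(X @ w + b))  # predicted labels
```

The same idea, with refinements such as learning-rate schedules, is what scikit-learn's `SGDClassifier(loss="hinge")` implements.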
In this paper, new concepts called left derivations and generalized left derivations in near-rings are defined. Furthermore, the commutativity of a 3-prime near-ring satisfying certain algebraic identities involving a generalized left derivation is studied.
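The abstract does not state the definitions themselves; for orientation, the forms commonly used in the near-ring literature are sketched below. These are assumed standard definitions and may differ in detail from the ones adopted in the paper.

```latex
% Standard forms from the literature (assumed). Let $N$ be a near-ring.
An additive map $d \colon N \to N$ is a \emph{left derivation} if
\[
  d(xy) = x\,d(y) + y\,d(x) \qquad \text{for all } x, y \in N,
\]
and an additive map $F \colon N \to N$ is a \emph{generalized left derivation}
associated with a left derivation $d$ if
\[
  F(xy) = x\,F(y) + y\,d(x) \qquad \text{for all } x, y \in N.
\]
```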
The propagation of a laser beam in underdense deuterium plasma has been studied via computer simulation using the fluid model. An appropriate computer code, "HEATER", has been modified and used for this purpose. The propagation is taken to be in a cylindrically symmetric medium. Different laser wavelengths (λ1 = 10.6 μm, λ2 = 1.06 μm, and λ3 = 0.53 μm) with a Gaussian pulse shape and 15 ns pulse width have been considered. Absorbed energy and laser flux have been calculated for different plasma and laser parameters. The absorbed laser energy is maximal for λ = 0.53 μm. This high absorptivity is attributed to the effect of the ponderomotive force.
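The abstract specifies a Gaussian pulse of 15 ns width; one common way to write such a temporal intensity profile, assuming the quoted width is the full width at half maximum, is:

```latex
% Assumed Gaussian temporal profile; the 15 ns width is taken as the FWHM.
\[
  I(t) = I_0 \exp\!\left[-4\ln 2\,\frac{(t - t_0)^2}{\tau^2}\right],
  \qquad \tau = 15\ \text{ns}.
\]
```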
It is well known that the existence of outliers in the data adversely affects the efficiency of estimation and the results of a study. In this paper, four methods for detecting outliers in the multiple linear regression model are studied in two cases: first, on real data; and second, after adding outliers to the data and attempting to detect them. The study is conducted for samples of different sizes and uses three measures for comparing these methods: masking, swamping, and the standard error of the estimate.
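The abstract does not name the four detection methods; purely as a generic illustration (not necessarily one of the studied methods), observations in a multiple linear regression fit can be flagged with externally studentized residuals, for example:

```python
# Generic illustration: flagging regression outliers with externally
# studentized residuals. The simulated data and the cutoff of 3 are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=50)
y[10] += 8.0                      # inject one artificial outlier

model = sm.OLS(y, sm.add_constant(X)).fit()
student_resid = model.outlier_test()["student_resid"]
outliers = np.where(np.abs(student_resid) > 3)[0]
print("flagged observations:", outliers)
```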
The research studied the effect of three plowing depths (10, 20 and 30 cm) and three disc-harrow angles (18°, 20° and 25°) when combined in one compound machine consisting of a triple plow and disc harrows mounted on a single frame. The studied parameters were draft force, fuel consumption, practical productivity, and resistance to soil penetration. The results indicated that plowing depth and disc angle had a significant effect on all studied parameters: increasing the plowing depth and the disc angle increased draft force, fuel consumption, and resistance to soil penetration, and reduced the machine's practical productivity.
The investigation of machine learning techniques for addressing missing well-log data has attracted considerable interest recently, especially as the oil and gas sector pursues novel approaches to improve data interpretation and reservoir characterization. However, for wells that have been in operation for several years, conventional measurement techniques frequently encounter availability challenges, including missing well-log data, cost considerations, and precision issues. The objective of this study is to enhance reservoir characterization by automating well-log generation using machine-learning techniques. Among the methods used are multi-resolution graph-based clustering and the similarity threshold method. By using cutti
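Neither multi-resolution graph-based clustering nor the similarity threshold method is detailed in the abstract; as a rough stand-in for the general idea of grouping depth samples by their log responses, a plain k-means clustering of standardized well logs is sketched below. The file name, curve names, and cluster count are hypothetical.

```python
# Rough stand-in for grouping well-log depth samples into facies-like clusters:
# plain k-means on standardized logs. The CSV file, curve names ("GR", "RHOB",
# "NPHI", "DT") and the cluster count are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

logs = pd.read_csv("well_logs.csv")                  # hypothetical input file
curves = logs[["GR", "RHOB", "NPHI", "DT"]].dropna()
X = StandardScaler().fit_transform(curves)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
logs.loc[curves.index, "cluster"] = labels
print(logs["cluster"].value_counts(dropna=False))
```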
A resume is the first impression between you and a potential employer; therefore, its importance can never be underestimated. Selecting the right candidates for a job within a company can be a daunting task for recruiters when they have to review hundreds of resumes. To reduce time and effort, NLTK and natural language processing (NLP) techniques can be used to extract essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs that work with human language data. To select the best resume according to the company's requirements, an algorithm such as KNN is used. To be selected from hundreds of resumes, your resume must be one of the best. Theref
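The exact pipeline is not given beyond NLTK and KNN; one plausible sketch of the idea, tokenizing resumes with NLTK, vectorizing them with TF-IDF, and ranking them against a job description by nearest neighbours with cosine distance, is shown below. The sample texts and the value of k are purely illustrative.

```python
# Minimal sketch: tokenize with NLTK, vectorize with TF-IDF, and rank resumes
# against a job description using k-nearest neighbours (cosine distance).
# The sample texts and k value are illustrative, not from the paper.
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

nltk.download("punkt", quiet=True)      # tokenizer models
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases

resumes = [
    "Python developer with NLP and machine learning experience",
    "Accountant skilled in auditing and financial reporting",
    "Data scientist experienced in Python, scikit-learn and NLTK",
]
job_description = "Looking for a Python machine learning engineer with NLP skills"

vectorizer = TfidfVectorizer(tokenizer=nltk.word_tokenize, lowercase=True)
X = vectorizer.fit_transform(resumes)

knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
_, idx = knn.kneighbors(vectorizer.transform([job_description]))
print("best-matching resumes:", idx[0].tolist())
```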
An easy, selective, precise high-performance liquid chromatographic (HPLC) procedure was developed and validated for the estimation of Piroxicam and Codeine phosphate. Chromatographic separation was accomplished on a C18 column (BDS Hypersil C18, 5 μm, 150 × 4.6 mm) using a mobile phase of methanol: phosphate buffer (60:40, v/v, pH 2.3); the flow rate was 1.1 mL/min and UV detection was at 214 nm. System suitability tests (SSTs) are typically performed to assess the suitability and effectiveness of the entire chromatography system. The retention time was found to be 3.95 minutes for Piroxicam and 1.46 minutes for Codeine phosphate. The developed method was validated for precision, limit of quantitation, specificity,