Automatic document summarization technology is evolving and may offer a solution to the problem of information overload. Multi-document summarization is an optimization problem that demands optimizing more than one objective function concurrently. The proposed work balances two significant objectives, content coverage and diversity, while generating a summary from a collection of text documents. Despite the large effort several researchers have devoted to designing and evaluating text summarization techniques, their formulations lack a model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. Here, generic sentence-extraction-based text summarization is modeled as an optimization problem recast into semantic measures reflecting content coverage and content diversity as explicit individual optimization models. The two models are then coupled and defined as a multi-objective optimization (MOO) problem. To the best of our knowledge, this is the first attempt to address the text summarization problem as a MOO model. Moreover, heuristic perturbation and heuristic local repair operators are proposed and injected into the adopted evolutionary algorithm to harness its strength. The proposed model is assessed on document sets supplied by the Document Understanding Conference 2002 (DUC 2002), and a comparison is made with other state-of-the-art methods using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit. The results obtained provide strong evidence of the effectiveness of the proposed MOO-based model over other state-of-the-art models.
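The two competing objectives can be illustrated with a minimal sketch. The functions below are hypothetical stand-ins, not the paper's formulation: they score a candidate extractive summary on coverage and diversity using simple word-overlap (Jaccard) similarity, where the paper would use its own semantic measures.

```python
def sim(a, b):
    """Jaccard similarity between two sentences given as strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def coverage(summary, source_sentences):
    """Coverage objective: how well the summary represents the sources."""
    return sum(max(sim(s, d) for s in summary) for d in source_sentences)

def diversity(summary):
    """Diversity objective: 1 minus mean pairwise redundancy."""
    n = len(summary)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 1.0
    return 1.0 - sum(sim(summary[i], summary[j]) for i, j in pairs) / len(pairs)

docs = ["the cat sat on the mat", "a dog barked at the cat", "stocks rose sharply today"]
summary = ["the cat sat on the mat", "stocks rose sharply today"]
```

A MOO solver (e.g. an evolutionary algorithm) would then search for summaries that are Pareto-optimal with respect to both scores, since maximizing coverage alone tends to admit redundant sentences.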
The present article discusses innovative word-formation processes in Internet texts: the emergence of new derivative words, new affixes, word-formation models, and word-formation methods. Using several neologisms as examples, the article shows both the possibilities of the Internet word-making process and the possibilities of studying a newly coined word through Internet communication. The words selected for analysis can be attributed to the keywords of the current time (in particular, words included in the list of "Words of 2019"). There are a number of words formed by suffixation, the traditional method of Russian word formation. A negation of these words is usually made thro
This study of the plant Ain-AL Bason (Catharanthus roseus) showed the ability of callus cells, produced by the in vitro culture technique and transferred to the accumulation media (MS, 40 g/L sucrose, 2 g/L IAA (indole acetic acid), 0.5 g/L tryptophan), to produce the compounds vinblastine and vincristine. Extraction, purification, and quantitative determination of vinblastine and vincristine were carried out using the high-performance liquid chromatography (HPLC) technique. The results showed that the highest concentrations of vinblastine and vincristine were 4.653 and 12.5 ppm/0.5 dry weight, respectively, from callus cells transferred from MS with 40 g/L sucrose and 2 g/L NAA (naphthalene acetic acid).
Survival analysis is widely applied to data described by the length of time until the occurrence of an event of interest, such as death or other important events. The purpose of this paper is to use a dynamic methodology, which provides a flexible method especially for the analysis of discrete survival time, to estimate the effect of covariates over time in the survival analysis of dialysis patients with kidney failure until death occurs. The estimation process is based entirely on the Bayesian approach, using two estimation methods: maximum a posteriori (MAP) estimation involving Iteratively Weighted Kalman Filter Smoothing (IWKFS), in combination with the Expectation Maximization (EM) algorithm. While the other
Haplotype association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. It starts with inferring haplotypes from genotypes, followed by haplotype co-classification and marginal screening for disease-associated haplotypes. Unfortunately, phasing uncertainty may have a strong effect on the haplotype co-classification and therefore on the accuracy of predicting risk haplotypes. Here, to address the issue, we propose an alternative approach: in Stage 1, we select potential risk genotypes inste
The present study analyzes the effect of a couple stress fluid (CSF) under an applied inclined magnetic field (IMF) in a non-uniform channel (NUC) through a porous medium (PM), taking into account the slip-velocity effect on the channel walls and the effect of nonlinear particle size, and applying long-wavelength and low-Reynolds-number approximations. Mathematical expressions for the axial velocity, stream function, mechanical efficiency, and pressure rise have been determined analytically. The effects of the physical parameters are included in the computational results of the present model. The results of this algorithm are presented in chart form using a mathematical program.
The pulse-coupled oscillator (PCO) model has attracted substantial attention and is widely used in wireless sensor networks (WSNs). It mimics a natural phenomenon: the synchronization of fireflies, which flash in unison to attract mating partners. However, the PCO model might not be applicable for simultaneous transmission and data reception because of energy constraints. Thus, an energy-efficient pulse-coupled oscillator (EEPCO) has been proposed, which employs a self-organizing method combining biologically and non-biologically inspired network systems and has been shown to reduce the transmission delay and energy consumption of sensor nodes. However, the EEPCO method has only been experimented with in attack-free networks without
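The firefly-style synchronization underlying PCO can be sketched with a minimal Mirollo-Strogatz-type simulation. This is an illustration of the PCO principle only, not the EEPCO protocol; the parameter values and the concave phase-response curve are assumptions chosen for the demonstration.

```python
def pulse(phase, eps):
    # Concave phase-response: nodes closer to firing get a bigger boost,
    # which is what drives "absorption" into a synchronized group.
    return (phase ** 0.5 + eps) ** 2

def simulate(phases, eps=0.1, dt=0.01, steps=20000):
    """Each node's phase climbs linearly toward 1; on firing it resets to 0
    and nudges every other node's phase upward via `pulse`."""
    phases = list(phases)
    for _ in range(steps):
        phases = [p + dt for p in phases]
        if max(phases) >= 1.0:
            bumped = [0.0 if p >= 1.0 else pulse(p, eps) for p in phases]
            # nodes pushed past the threshold by the pulse fire too
            phases = [0.0 if p >= 1.0 else p for p in bumped]
    return phases

final = simulate([0.0, 0.31, 0.64])  # phase spread shrinks toward zero
```

After enough cycles the three oscillators fire together, which is the behavior WSN nodes exploit to agree on slot boundaries without a central clock.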
The concept of Extend Nearly Pseudo Quasi-2-Absorbing submodules was introduced by Omar A. Abdullah and Haibat K. Mohammadali in 2022, where they studied this concept and its relationship to previous generalizations, especially 2-Absorbing and Quasi-2-Absorbing submodules, in addition to studying the most important propositions, characterizations, and examples. In this research, which is a continuation of the definition presented earlier, we complete the study of this concept in multiplication modules, and the relationship between the Extend Nearly Pseudo Quasi-2-Absorbing submodule and Extend Nearly Pseudo Quasi-2-Abs
The aim of this study is to propose reliable equations for estimating the in-situ concrete compressive strength from non-destructive tests. Three equations were proposed: the first considers the rebound hammer number only, the second considers the ultrasonic pulse velocity only, and the third combines the rebound hammer number and the ultrasonic pulse velocity. The proposed equations were derived by non-linear regression analysis and calibrated against the test results of 372 concrete specimens compiled from the literature. The performance of the proposed equations was tested by comparing their strength estimates with those of related existing equations from the literature. Comparis
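The third (combined) model type can be sketched as follows. This is an illustrative fit, not the paper's equation: it assumes a power-law form f_c = a * R^b * V^c, linearizes it with logarithms, and solves the normal equations by least squares; the data points are synthetic placeholders, not the 372 compiled specimens.

```python
import math

# Synthetic placeholder data (not the study's specimens)
R  = [25.0, 30.0, 35.0, 40.0, 45.0, 32.0]   # rebound hammer number
V  = [4.6, 3.8, 4.4, 3.9, 4.5, 4.1]         # ultrasonic pulse velocity, km/s
fc = [0.01 * r**1.2 * v**2.0 for r, v in zip(R, V)]  # assumed "true" strengths

# Linearize: ln fc = ln a + b ln R + c ln V, then fit by least squares
rows = [[1.0, math.log(r), math.log(v)] for r, v in zip(R, V)]
y = [math.log(f) for f in fc]

def lstsq3(rows, y):
    """Solve the 3x3 normal equations A^T A x = A^T y by Gaussian elimination."""
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):                      # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda k: abs(m[k][col]))
        m[col], m[piv] = m[piv], m[col]
        for k in range(col + 1, 3):
            f = m[k][col] / m[col][col]
            m[k] = [mk - f * mc for mk, mc in zip(m[k], m[col])]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):                # back substitution
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

lna, b, c = lstsq3(rows, y)
a = math.exp(lna)  # recovers the generating coefficients on this exact data
```

On real specimen data the fit would of course carry residual error, and the exponents would be whatever the calibration yields rather than the values assumed here.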
Let G be a graph in which each edge e is given a weight w(e). The shortest path problem seeks a path of minimum weight connecting two specified vertices a and b, and from it we obtain a pre-topology. Furthermore, we study restriction and separators in the pre-topology generated by shortest path problems. Finally, we study the rate of liaison in a pre-topology between two subgraphs. It is formally shown that the new distance measure is a metric.
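The underlying shortest-path computation is standard; a Dijkstra sketch (assuming non-negative edge weights, which minimum-weight metrics require) shows the distance the pre-topology construction builds on. The graph encoding below is an illustrative choice, not the paper's notation.

```python
import heapq

def shortest_path(graph, a, b):
    """graph: {vertex: [(neighbor, weight), ...]} with non-negative weights.
    Returns (total weight, path) from a to b, or (inf, []) if unreachable."""
    dist = {a: 0.0}
    prev = {}
    heap = [(0.0, a)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == b:                      # reconstruct the minimum-weight path
            path = [b]
            while path[-1] != a:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

g = {"a": [("c", 1), ("b", 5)], "c": [("b", 1)], "b": []}
```

The metric claim in the abstract corresponds to the familiar properties of this distance: it is non-negative, symmetric on undirected graphs, and satisfies the triangle inequality because concatenating two shortest paths gives some a-to-b path.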
Multivariate non-parametric control charts were used to monitor data generated by simulation, determining whether they fall within the control limits or not, since non-parametric methods do not require any assumptions about the distribution of the data. This research aims to apply multivariate non-parametric quality control methods, namely the multivariate Wilcoxon signed-rank chart, kernel principal component analysis (KPCA), and k-nearest neighbor (k-