The concept of intertextuality has long occupied critics concerned with how texts overlap and interact in the production of meaning. Intertextuality eventually became a stable term whose workings can be traced within the structure of the theatrical text, with the mechanisms of interaction between texts identified through the fields and classifications agreed upon by the major critics who theorized intertextuality. Our previous research (the approach of exposure in the epistemological hallway to intertextuality) was an attempt to coin a terminology through which the researcher could derive, from the mechanisms of intertextuality and other innovative mechanisms, a critical method for reading the relational structure of performance as it appears across texts. What is new in our current (applied) research is that it is another experimental attempt in the field of (exposure), this time monitoring the same relational structure by activating the mechanisms of (exposure) within (the one play with multiple scenes). Herein lies the distinctiveness of this attempt as a practical exercise in critical dramaturgy: the researcher analyzes a single show composed of several scenes according to the mechanisms of (exposure), after conducting a (dramaturgical) study of the show he chose (as a research sample), in order to prove that the mechanisms of (exposure) can be employed in analyzing a multi-scene performance with the same critical and analytical method developed in his previous research. Herein, in the researcher's estimation, lies the novelty of this research.
Estimating the ordinary regression model requires several assumptions to be satisfied, such as "linearity". One problem arises when the regression curve is partitioned into two (or more) segments joined at threshold point(s); this situation is regarded as a violation of the linearity assumption of regression. The multiphase regression model has therefore received increasing attention as an alternative approach that describes changes in the behavior of a phenomenon through threshold-point estimation. The maximum likelihood estimator "MLE" has been used for both the model and the threshold-point estimation. However, the MLE is not resistant to violations such as the existence of outliers or heavy-tailed error distributions. The main goal of t
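The two-phase setting the abstract describes can be illustrated with a minimal least-squares sketch (not the paper's robust or MLE-based estimator): grid-search candidate threshold points, fit a separate line on each side, and keep the threshold that minimizes the total sum of squared errors. The synthetic data, threshold grid, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_two_phase(x, y, candidates):
    """Grid-search a single threshold: fit one line per segment and
    return (sse, threshold, left_coefs, right_coefs) with minimal SSE."""
    best = (np.inf, None, None, None)
    for c in candidates:
        left, right = x <= c, x > c
        if left.sum() < 2 or right.sum() < 2:   # need >= 2 points per line
            continue
        sse, coefs = 0.0, []
        for mask in (left, right):
            A = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            sse += float(np.sum((y[mask] - A @ beta) ** 2))
            coefs.append(beta)
        if sse < best[0]:
            best = (sse, c, coefs[0], coefs[1])
    return best

# Synthetic data with a slope change at the threshold x = 5
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.where(x <= 5.0, 1.0 + 2.0 * x, 11.0 + 0.5 * (x - 5.0))
y += rng.normal(scale=0.3, size=x.size)

sse, threshold, left, right = fit_two_phase(x, y, np.linspace(1.0, 9.0, 81))
print(round(threshold, 2))
```

Because the criterion here is the SSE, this sketch inherits exactly the sensitivity to outliers that the abstract criticizes in the MLE; a robust variant would swap the squared loss for a resistant one.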
Variable selection in Poisson regression with high-dimensional data has been widely used in recent years. In this paper we propose using the Atan penalty function. The Atan estimator is compared with the Lasso and the adaptive Lasso. A simulation study and a real-data application show that the Atan estimator has an advantage in coefficient estimation and variable selection.
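For orientation, one parameterization of the Atan penalty reported in the penalized-regression literature is p(t) = λ(γ + 2/π)·arctan(|t|/γ), which interpolates between an L0-type penalty as γ → 0 and the L1 (Lasso) penalty as γ → ∞; whether this is exactly the form used in the paper is an assumption. A small numeric check of the two limits:

```python
import numpy as np

def atan_penalty(t, lam=1.0, gamma=1.0):
    """Assumed Atan penalty: lam * (gamma + 2/pi) * arctan(|t| / gamma).
    Small gamma ~ L0 behavior; large gamma ~ L1 (Lasso) behavior."""
    return lam * (gamma + 2.0 / np.pi) * np.arctan(np.abs(t) / gamma)

t = 2.0
l0_like = atan_penalty(t, lam=1.0, gamma=1e-8)   # ~ lam * 1{t != 0} = 1
l1_like = atan_penalty(t, lam=1.0, gamma=1e8)    # ~ lam * |t| = 2
print(round(l0_like, 4), round(l1_like, 4))
```

The interpolation is what gives the estimator its appeal for variable selection: it is nearly unbiased for large coefficients (unlike the Lasso) while still shrinking small ones to zero.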
The purpose of building a linear regression model is to describe the real linear relation between each explanatory variable in the model and the dependent variable, on the basis that the dependent variable is a linear function of the explanatory variables, so that the model can be used for prediction and control. This purpose cannot come true without obtaining significant, stable, and reasonable estimators for the parameters of the model, specifically the regression coefficients. The researcher found the criterion he had suggested, "RUF", accurate and sufficient to accomplish that purpose when multicollinearity exists, provided that an adequate model satisfying the standard assumptions of the error term can be assigned. It
Botnet detection poses a challenging problem in numerous fields such as cybersecurity, law, finance, and healthcare. A botnet is a group of compromised Internet-connected devices controlled by cybercriminals to launch coordinated attacks and carry out various malicious activities. Because botnets are highly dynamic, evolving against the countermeasures deployed by both network- and host-based detection techniques, conventional techniques fail to provide sufficient protection against botnet threats. Thus, machine learning approaches have been established for detecting and classifying botnets for cybersecurity. This article presents a novel dragonfly algorithm with multi-class support vector machines enabled botnet
The non-isothermal crystallization kinetics and crystalline properties of poly(butylene terephthalate) (PBT)/multiwalled carbon nanotube (MWCNT) nanocomposites were examined by differential scanning calorimetry (DSC). The PBT/MWCNT nanocomposites were prepared by ultrasonication of MWCNTs (0.5, 1, 2, and 4 wt%) in dichloromethane (DCM), after which the powdered PBT polymer was added to the MWCNT solution. The non-isothermal crystallization results show that increasing the MWCNT content decreased the melting temperature (Tm) of the PBT/MWCNT nanocomposites compared with pure PBT, while improving the degree of crystallinity. These results indicate that a small amount of MWCNTs can act as a strong nucleating agent in PBT na
History matching is a significant stage in reservoir modeling for evaluating past reservoir performance and predicting future behavior. This paper focuses primarily on the calibration of the dynamic reservoir model for the Meshrif formation, the main reservoir in the Garraf oilfield. A full-field reservoir model with 110 producing wells is constructed using a comprehensive dataset that includes geological, pressure-volume-temperature (PVT), and rock-property information. The resulting 3D geological model provides detailed information on water saturation, permeability, porosity, and net-to-gross thickness for each grid cell, and forms the basis for constructing the dynamic reservoir model. The dynamic reservoir mo
In recent years, with the rapid development of digital content identification, the automatic classification of images has become one of the most challenging tasks in computer vision: automatically understanding and analyzing images is far harder for a system than for human vision. Some research papers have addressed the issue with low-level classification systems, but their output was restricted to basic image features, and such approaches fail to classify images accurately. To achieve the results expected in this field, this study proposes an approach that utilizes a deep learning algorithm.
This paper presents a novel idea: it investigates, for the first time, the rescue effect on the prey together with a fluctuation effect, proposing a modified predator-prey model in non-autonomous form. An approximation method is utilized to convert the non-autonomous model into an autonomous one, simplifying the mathematical analysis and the study of the dynamical behaviors. Some theoretical properties of the proposed autonomous model, such as boundedness, stability, and the Kolmogorov conditions, are studied. The paper's analytical results demonstrate that the dynamic behaviors are globally stable and that the rescue effect improves the likelihood of coexistence compared with the case of no rescue effect. Furthermore, numerical simul
This study aimed to measure marketing efficiency and the important factors affecting it, using the TOBIT qualitative response model for the wheat crop in Salahalddin province. Results revealed that independent factors such as marketing type, the crop's duration in the field, average marketing cost, distance between farm and marketing center, and average productivity had an impact on wheat marketing efficiency; this impact varied in size and direction according to the values of the parameters. Marketing efficiency fluctuated among the cities and towns of the province, with an average value at the province level of 76.75%. The study recommends developing marketing infrastructure, which is essential for increasing efficiency. In addition, it is impo
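A TOBIT model of the kind described is typically estimated by maximizing a censored-normal likelihood, since the efficiency measure cannot fall outside its bounds. The sketch below uses simulated data with left-censoring at zero and illustrative variable names, not the study's actual specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data: latent outcome censored from below at zero
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([0.5, 1.0]), 1.0
y_star = X @ beta_true + rng.normal(scale=sigma_true, size=n)
y = np.maximum(y_star, 0.0)            # observed (left-censored) outcome

def negloglik(params):
    """Tobit negative log-likelihood: normal density for uncensored
    observations, normal CDF mass for censored ones."""
    beta, sigma = params[:-1], np.exp(params[-1])   # sigma > 0 via log
    mu = X @ beta
    ll = np.where(y <= 0.0,
                  norm.logcdf(-mu / sigma),         # P(y* <= 0)
                  norm.logpdf(y, mu, sigma))        # density at observed y
    return -ll.sum()

res = minimize(negloglik, x0=np.zeros(3), method="BFGS")
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
print(np.round(beta_hat, 2), round(sigma_hat, 2))
```

Running OLS on such censored data instead would bias the coefficients toward zero, which is precisely why the study reaches for TOBIT rather than ordinary regression.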
In this paper, the generation of a chaotic carrier by the Lorenz model is theoretically studied. The encoding technique used is chaos masking of a sinusoidal signal (the message), and an optical chaotic communication system is evaluated for different receiver configurations. It is shown that chaotic carriers allow the successful encoding and decoding of messages, focusing on the effect of changing the initial conditions of the states of the dynamical system, i.e. changing the values (x, y, z, x1, y1, and z1).
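The chaos-masking scheme can be sketched numerically in the style of the classic Cuomo-Oppenheim Lorenz demonstration (an electronic/numerical analogue, not the paper's optical setup; the parameters, message, and initial conditions below are illustrative assumptions). A transmitter Lorenz system hides a small sinusoid in its x variable; a receiver copy driven by the transmitted signal synchronizes despite different initial conditions, and subtracting its reconstructed x recovers the message:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.001, 40000

def simulate(amp, freq=0.1):
    """Mask a sinusoid of amplitude `amp` on the transmitter's x signal,
    drive a receiver copy with it, return (recovered, message, sync_err)."""
    x, y, z = 1.0, 1.0, 1.0               # transmitter initial state
    xr, yr, zr = -5.0, 3.0, 20.0          # receiver starts far away
    t = np.arange(STEPS) * DT
    message = amp * np.sin(2 * np.pi * freq * t)
    recovered, sync_err = np.empty(STEPS), np.empty(STEPS)
    for k in range(STEPS):
        s = x + message[k]                # chaos masking: transmit x + m(t)
        recovered[k] = s - xr             # receiver's estimate of m(t)
        sync_err[k] = abs(x - xr)
        # transmitter: autonomous Lorenz system, forward Euler step
        x, y, z = (x + DT * SIGMA * (y - x),
                   y + DT * (x * (RHO - z) - y),
                   z + DT * (x * y - BETA * z))
        # receiver: the received signal s replaces x in the y, z equations
        xr, yr, zr = (xr + DT * SIGMA * (yr - xr),
                      yr + DT * (s * (RHO - zr) - yr),
                      zr + DT * (s * yr - BETA * zr))
    return recovered, message, sync_err

tail = slice(STEPS // 2, None)            # discard synchronization transient
_, _, err = simulate(0.0)                 # no message: receiver syncs exactly
rec, msg, _ = simulate(0.5)
corr = np.corrcoef(msg[tail], rec[tail])[0, 1]
print(err[tail].max(), round(corr, 2))
```

The zero-message run confirms that the receiver subsystem synchronizes (its conditional Lyapunov exponents are negative for these parameters), which is what makes decoding possible; with the message present, the recovered waveform tracks the sinusoid with some chaotic distortion, consistent with the abstract's claim of successful encoding and decoding.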