During the 1970s, the communicative view of language teaching began to be incorporated into syllabus design. The central question for proponents of this view was: what does the learner want or need to do with the target language? This led to the emergence of a teaching method (or approach) called communicative language teaching (CLT) during the late 1970s and early 1980s, which focused on the functions that must be incorporated into the classroom. According to Brown (2001:43), CLT is a unified but broadly based, theoretically well-informed set of tenets about the nature of language and of language learning and teaching. Harmer (2001:84) states that the communicative approach is the name given to a set of beliefs which included not only a re-examination of which aspects of language to teach, but also a shift in emphasis in how to teach. The "what to teach" aspect of CLT stressed the significance of language functions rather than
focusing solely on grammar and vocabulary. The "how to teach" aspect of CLT is closely related to the idea that language learning will take care of itself, and that plentiful exposure to language in use, along with plenty of opportunities to use it, is vitally important for a student's development of knowledge and skill.
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One efficient approach to data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). Data packets are then forwarded to a sink node in a single-hop or multi-hop manner, which can increase the energy depletion of the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co
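The energy imbalance at the cluster head can be illustrated with a minimal, hypothetical sketch in which the CH is chosen by residual energy and pays an extra forwarding cost each round. The node fields, cost values, and selection rule here are illustrative assumptions, not a specific protocol from the literature:

```python
def select_cluster_head(nodes):
    """Pick the node with the highest residual energy as cluster head (CH)."""
    return max(nodes, key=lambda n: n["energy"])

def round_of_communication(nodes, packet_cost=1.0, ch_extra_cost=4.0):
    """One clustering round: members send to the CH, which forwards to the sink.

    The CH pays an extra aggregation/forwarding cost, illustrating why CH
    energy depletes faster than that of ordinary member nodes.
    """
    ch = select_cluster_head(nodes)
    for n in nodes:
        n["energy"] -= packet_cost      # every node transmits its own packet
    ch["energy"] -= ch_extra_cost       # CH also aggregates and forwards
    return ch["id"]

nodes = [{"id": i, "energy": 100.0} for i in range(5)]
first_ch = round_of_communication(nodes)
second_ch = round_of_communication(nodes)   # a different node is picked as CH
```

Because the first CH's energy drops faster, rotating the role by residual energy spreads the depletion across nodes, which is the intuition behind energy-aware CH selection schemes.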
A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the parameters of the model, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike information criterion (AIC) value. The case study involves four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, located in northern Iraq. The model performance was
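The combination of mutation-based refinement and AIC minimization can be sketched as follows. This toy example fits a one-variable linear model by accepting Gaussian mutations only when they lower the AIC; the data, model form, mutation rate, and scale are all illustrative assumptions, not the paper's hydrological model:

```python
import math
import random

random.seed(0)

def aic(residual_ss, n_obs, n_params):
    """Akaike information criterion for a least-squares fit."""
    return n_obs * math.log(residual_ss / n_obs) + 2 * n_params

def mutate(params, rate=0.3, scale=0.1):
    """GA-style mutation: perturb each parameter with probability `rate`."""
    return [p + random.gauss(0.0, scale) if random.random() < rate else p
            for p in params]

# Toy data roughly on y = 2x + 1; model y = a*x + b (assumed form).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

def rss(params):
    a, b = params
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

best = [0.0, 0.0]
best_aic = aic(rss(best), len(xs), len(best))
for _ in range(5000):
    cand = mutate(best)
    cand_aic = aic(rss(cand), len(xs), len(cand))
    if cand_aic < best_aic:     # keep the mutation only if AIC improves
        best, best_aic = cand, cand_aic
```

With a fixed number of parameters the AIC comparison reduces to comparing residual sums of squares; AIC becomes decisive when candidate models of different sizes compete, as in the multi-variable, multi-site setting described above.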
The elastic magnetic M1 electron scattering form factor has been calculated for the ground state J,T = 1/2−,1/2 of 13C. The single-particle model is used with a harmonic oscillator wave function. The core-polarization effects are calculated in first-order perturbation theory, including excitations up to 5ħω, using the modified surface delta interaction (MSDI) as the residual interaction. No parameters are introduced in this work. The data are reasonably explained up to q ≈ 2.5 fm−1.
This study employs evolutionary optimization and artificial intelligence algorithms to determine an individual's age using a single facial image as the basis for the identification process. Additionally, we used the WIKI dataset, widely considered the most comprehensive collection of facial images to date, including descriptions of age and gender attributes. Although much research has been undertaken on establishing chronological age from facial photographs, estimating age from facial images remains a recent topic of study. Retrained artificial neural networks are used for classification after applying preprocessing and optimization techniques to achieve this goal. It is possible that the difficulty of determining age could be reduced
This paper focuses on developing a self-starting numerical approach that can be used for the direct integration of higher-order initial value problems of ordinary differential equations. The method is derived from a power series approximation, with the resulting equations discretized at selected grid and off-grid points. The method is applied in a block-by-block manner as a numerical integrator of higher-order initial value problems. The basic properties of the block method are investigated to authenticate its performance, and the method is then implemented on several test experiments to validate its accuracy and convergence.
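The power-series idea underlying such methods can be illustrated with a toy example (this is not the authors' derived block scheme): for the second-order problem y'' = −y, y(0) = 0, y'(0) = 1, substituting a power series into the equation gives the coefficient recurrence a[n+2] = −a[n] / ((n+1)(n+2)), whose partial sums approximate the exact solution sin(x):

```python
import math

def power_series_solution(x, terms=12):
    """Truncated power-series solution of y'' = -y, y(0)=0, y'(0)=1.

    The recurrence a[n+2] = -a[n] / ((n+1)(n+2)) follows from matching
    coefficients of x^n in y'' + y = 0.
    """
    a = [0.0] * terms
    a[0], a[1] = 0.0, 1.0                       # y(0) and y'(0)
    for n in range(terms - 2):
        a[n + 2] = -a[n] / ((n + 1) * (n + 2))
    return sum(c * x**k for k, c in enumerate(a))

approx = power_series_solution(0.5)             # should be close to sin(0.5)
```

A block method builds on the same approximation but collocates it at several grid and off-grid points at once, producing the solution over a whole block of points per step rather than one point at a time.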
Virtual decomposition control (VDC) is an efficient tool for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. Therefore, in this paper, we focus on reorganizing the VDC equations for a serial-chain manipulator using the adaptive function approximation technique (FAT), without needing specific system physics. The dynamic matrices of the dynamic equation of every subsystem (e.g., link and joint) are approximated by orthogonal functions, which minimize the approximation errors produced. The contr
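The core idea of FAT, representing an unknown function as a weighted sum of orthogonal basis functions, can be sketched offline with a least-squares fit. The target function and basis size below are illustrative assumptions (a FAT-based controller would instead adapt the weights online):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(-1.0, 1.0, 200)
target = np.sin(3.0 * xs) + 0.5 * xs   # stand-in for an unknown dynamic term

# Least-squares fit of Chebyshev (orthogonal polynomial) coefficients --
# the "weights" that an adaptive FAT controller would update online.
coeffs = C.chebfit(xs, target, deg=10)
approx = C.chebval(xs, coeffs)
max_err = float(np.max(np.abs(approx - target)))
```

Because the basis functions are known and fixed, only the scalar weights need estimating, which is what removes the requirement for a physics-derived regressor in the adaptive law.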
In this study, we compare the LASSO and SCAD methods, two special methods for dealing with models in partial quantile regression. The Nadaraya-Watson kernel estimator was used to estimate the non-parametric part; in addition, the rule-of-thumb method was used to estimate the smoothing bandwidth (h). Penalty methods proved to be efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion, after estimating the missing data using the mean imputation method.
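The qualitative difference between the two penalties can be sketched directly from their definitions: LASSO penalizes all coefficients linearly, while SCAD matches LASSO near zero but levels off for large coefficients, reducing bias on strong effects. The tuning values below (λ = 1, and a = 3.7, the value conventionally recommended by Fan and Li) are illustrative, not those used in the study:

```python
def lasso_penalty(beta, lam):
    """LASSO penalty: linear in |beta| everywhere."""
    return lam * abs(beta)

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty: LASSO-like near zero, constant for large |beta|."""
    b = abs(beta)
    if b <= lam:
        return lam * b                                   # same as LASSO
    if b <= a * lam:
        return (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    return lam**2 * (a + 1) / 2                          # flat: no extra bias

lam = 1.0
small, large = 0.5, 10.0
```

The flat tail is what gives SCAD its oracle-type behavior: large coefficients are shrunk no further, whereas LASSO keeps shrinking them, which is one reason SCAD can win on MSE comparisons like the one reported above.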