Corpus linguistics is a methodology for studying language through corpus-based research. It differs from the traditional (prescriptive) approach to studying a language in its insistence on the systematic study of authentic examples of language in use (the descriptive approach). A "corpus" is a large body of machine-readable, systematically collected, naturally occurring linguistic data, either written texts or transcriptions of recorded speech, which can be used as a starting point for linguistic description or as a means of verifying hypotheses about a language. In the past decade, interest in the use of language corpora for language education has grown tremendously. The ways in which corpora have been employed in language pedagogy fall into two main categories: indirect and direct application. In the former, corpora are used in designing and developing syllabuses, dictionaries, tests, and teaching materials; in the latter, corpus data are used for data-driven learning and what is known as the grammar safari. This research therefore aims at employing corpus data in teaching Arabic grammar. A functional syntactic analysis of the corpus is essential for this purpose, so the Quranic corpus is the most appropriate one. In this paper, I examined the specification phenomenon (التمييز) in the Quranic corpus and compared the descriptive and prescriptive approaches to teaching the causative object as a grammatical phenomenon in Arabic.
This paper presents a study of the application of gas lift (GL) to improve oil production in a Middle East field. The field has been experiencing a rapid decline in production due to a drop in reservoir pressure. GL is a widely used artificial lift technique that increases oil production by reducing the hydrostatic pressure in the wellbore. The study used a full-field model to simulate the effects of GL on production. The model was run under different production scenarios, including different water cut and reservoir pressure values. The results showed that GL can significantly increase oil production under all scenarios. The study also found that most wells in the field will soon be closed due to high water cuts. However, …
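As a rough illustration of the principle described above (not taken from the study), the following sketch shows how injecting gas lowers the average density of the fluid column and hence the hydrostatic back-pressure at the bottom of the well; the depth, densities, and gas fraction are assumed values.

```python
# Minimal illustration of the gas-lift principle: injected gas lowers the
# average density of the fluid column, reducing hydrostatic back-pressure
# on the formation. All numbers below are illustrative assumptions.

G = 9.81            # gravitational acceleration, m/s^2
DEPTH = 2500.0      # true vertical depth of the well, m (assumed)
RHO_LIQUID = 950.0  # density of produced liquid at high water cut, kg/m^3 (assumed)
RHO_GAS = 150.0     # in-situ density of injected gas at well conditions, kg/m^3 (assumed)

def hydrostatic_pressure(gas_fraction: float) -> float:
    """Hydrostatic pressure (Pa) of the column for a given in-situ gas volume fraction."""
    mixture_density = (1.0 - gas_fraction) * RHO_LIQUID + gas_fraction * RHO_GAS
    return mixture_density * G * DEPTH

p_no_lift = hydrostatic_pressure(0.0)
p_with_lift = hydrostatic_pressure(0.4)   # 40 % gas by volume after injection (assumed)

print(f"Column pressure without gas lift: {p_no_lift / 1e5:6.1f} bar")
print(f"Column pressure with gas lift:    {p_with_lift / 1e5:6.1f} bar")
print(f"Reduction in back-pressure:       {(p_no_lift - p_with_lift) / 1e5:6.1f} bar")
```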
This paper proposes a new encryption method. It combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation: our approach merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the remaining 15 keys. This complexity raises the level of the ciphering process, and the operation is shifted only one bit to the right. The second is the nature of the encryption process itself: it uses two keys and mixes one round of DES with one round of AES to reduce the running time. The W-method deals with …
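The abstract does not give the full key schedule, so the sketch below only illustrates the general idea under stated assumptions: a 64-bit DES-derived half and a 64-bit AES-derived half are concatenated into a 128-bit root key, and the remaining 15 keys are obtained by rotating it one bit to the right at each step. The input halves here are arbitrary placeholders, not real DES/AES key material.

```python
# Illustrative sketch (not the authors' exact W-method): concatenate a 64-bit
# DES-derived value and a 64-bit AES-derived value into a 128-bit root key,
# then derive 15 further round keys by rotating the root one bit to the right.

def rotate_right_128(value: int, bits: int = 1) -> int:
    """Rotate a 128-bit integer right by the given number of bits."""
    mask = (1 << 128) - 1
    value &= mask
    return ((value >> bits) | (value << (128 - bits))) & mask

def derive_round_keys(des_half: int, aes_half: int, rounds: int = 15) -> list:
    """Build the 128-bit root key from two 64-bit halves and derive the round keys."""
    root = ((des_half & ((1 << 64) - 1)) << 64) | (aes_half & ((1 << 64) - 1))
    keys = [root]
    for _ in range(rounds):
        keys.append(rotate_right_128(keys[-1], 1))
    return keys

# Example with arbitrary 64-bit halves standing in for DES/AES key material.
keys = derive_round_keys(0x0123456789ABCDEF, 0xFEDCBA9876543210)
for i, k in enumerate(keys):
    print(f"K{i:02d} = {k:032x}")
```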
In this paper, a new equivalent lumped-parameter model is proposed for describing the vibration of beams under a moving load. An analytical formula for calculating this vibration for low-speed loads is also presented. Furthermore, a MATLAB/Simulink model is introduced that gives a simple and accurate solution for designing beams subjected to any moving load, i.e., loads of any magnitude and speed. In general, the proposed Simulink model is much easier to use than the alternative FEM software usually employed to design such beams. The results obtained from the analytical formula and the proposed Simulink model were compared with those obtained from Ansys R19.0, and very good agreement has been shown. …
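As a rough illustration of the lumped, single-degree-of-freedom idea (not the paper's actual Simulink model), the sketch below reduces a simply supported beam under a constant moving force to its first vibration mode and integrates the resulting mass-spring-damper equation numerically; all parameter values are assumed.

```python
# Minimal sketch (not the paper's model): a single-mode lumped approximation of a
# simply supported beam traversed by a constant force P moving at speed v.
# The first mode behaves like a mass-spring-damper excited by P*sin(pi*v*t/L).
# All parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

L = 20.0          # span, m (assumed)
EI = 2.0e9        # flexural rigidity, N*m^2 (assumed)
mu = 1.0e3        # mass per unit length, kg/m (assumed)
zeta = 0.02       # modal damping ratio (assumed)
P = 5.0e4         # moving load magnitude, N (assumed)
v = 10.0          # load speed, m/s (assumed)

omega1 = (np.pi / L) ** 2 * np.sqrt(EI / mu)   # first natural circular frequency
m1 = mu * L / 2.0                              # first-mode generalised mass

def rhs(t, y):
    q, dq = y
    # Generalised force: mode shape sin(pi*x/L) evaluated at the load position x = v*t.
    f = P * np.sin(np.pi * v * t / L) if v * t <= L else 0.0
    ddq = (f - 2.0 * zeta * omega1 * m1 * dq - m1 * omega1**2 * q) / m1
    return [dq, ddq]

t_end = L / v
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-3)
# Midspan deflection equals the modal coordinate, since sin(pi/2) = 1.
print(f"Peak midspan deflection: {np.max(np.abs(sol.y[0])) * 1e3:.2f} mm")
```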
Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The main problems of image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data, which present the key obstacles to the task. The cornerstone of image classification is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers …
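A minimal sketch of the hybrid-learning pipeline described above, assuming a pretrained VGG-16 backbone from Keras is used as a fixed feature extractor and an SVM from scikit-learn stands in for one of the classical classifiers; the dataset arrays X and y are assumed to be provided by the caller.

```python
# Minimal sketch of the "hybrid learning" idea: use a pretrained VGG-16 network
# as a fixed convolutional feature extractor and train a classical machine-learning
# classifier (here an SVM) on the extracted features. Dataset loading is assumed;
# X is an array of RGB images of shape (N, 224, 224, 3), y their labels.

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(images: np.ndarray) -> np.ndarray:
    """Run images through VGG-16 without its classification head."""
    backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

def train_hybrid_classifier(X: np.ndarray, y: np.ndarray) -> float:
    """Train an SVM on VGG-16 features and return its test accuracy."""
    features = extract_features(X)
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Usage (assuming X, y are available):
#   acc = train_hybrid_classifier(X, y)
#   print(f"Hybrid VGG-16 + SVM accuracy: {acc:.3f}")
```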
In data transmission, a change in a single bit of the received data may lead to misunderstanding or even disaster. Every bit of the transmitted information has high priority, especially information such as the address of the receiver. The ability to detect every single-bit change is therefore a key issue in the field of data transmission.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
Two novel methods are suggested to detect binary bit-change errors when transmitting data over a noisy medium. These methods are the 2D-Checksum method …
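For context, the sketch below implements the standard schemes the abstract compares against (single parity, two-dimensional parity, and an Internet-style checksum); it is not the authors' proposed 2D-Checksum method, whose details are not given here.

```python
# Textbook versions of the classical detection schemes mentioned above:
# single parity, two-dimensional parity, and a 16-bit one's-complement checksum.

from typing import List

def parity_bit(bits: List[int]) -> int:
    """Even parity over a block of bits: detects any odd number of bit flips."""
    return sum(bits) % 2

def two_d_parity(block: List[List[int]]) -> tuple:
    """Row and column parities of a bit matrix; detects (and can locate) single errors."""
    rows = [parity_bit(row) for row in block]
    cols = [parity_bit(list(col)) for col in zip(*block)]
    return rows, cols

def internet_checksum(words: List[int]) -> int:
    """One's-complement sum of 16-bit words, as used by the Internet checksum."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries back in
    return (~total) & 0xFFFF

# A received block verifies if its recomputed parities/checksum match the transmitted ones.
data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
print("row/col parity:", two_d_parity(data))
print("checksum of 0x1234, 0xABCD:", hex(internet_checksum([0x1234, 0xABCD])))
```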
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing. Therefore, a specific class of methods is needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, which are support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes …
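The abstract does not spell out how ISSA encodes or scores candidate imputation values, so the sketch below only shows the standard SSA optimiser core with a stand-in fitness function (pulling candidate values toward the observed column means); the bounds, dimensions, and means are illustrative assumptions.

```python
# Minimal sketch of the salp swarm algorithm (SSA) core used as an optimiser.
# The authors' ISSA encoding and fitness are not given in the abstract; here the
# fitness is a stand-in that scores candidate values for two missing entries by
# their squared distance to the observed column means.

import numpy as np

def ssa_optimize(fitness, dim, lb, ub, n_salps=30, n_iter=100, seed=0):
    """Standard SSA: a leader explores around the best-so-far food source,
    followers move toward the salp in front of them."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_salps, dim))
    food, food_fit = None, np.inf
    for it in range(1, n_iter + 1):
        fits = np.apply_along_axis(fitness, 1, pos)
        best = fits.argmin()
        if fits[best] < food_fit:
            food, food_fit = pos[best].copy(), fits[best]
        c1 = 2 * np.exp(-(4 * it / n_iter) ** 2)
        for j in range(dim):                      # leader update
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub - lb) * c2 + lb)
            pos[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        for i in range(1, n_salps):               # follower update
            pos[i] = (pos[i] + pos[i - 1]) / 2
        pos = np.clip(pos, lb, ub)
    return food, food_fit

# Toy use: impute two missing glucose/BMI values by pulling them toward column means.
observed_means = np.array([120.0, 32.0])          # assumed column means
fitness = lambda x: float(np.sum((x - observed_means) ** 2))
values, score = ssa_optimize(fitness, dim=2, lb=0.0, ub=200.0)
print("imputed values:", values.round(2), "fitness:", round(score, 4))
```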
This article considers the use of multimedia tools, supported by computer technologies, to optimize the process of forming communicative competence in an Iraqi audience. The article is devoted to the use of multimedia technologies and various techniques for developing interest in the Russian language. The inclusion in the learning process of communicatively significant, authentic …