CO2 is considered one of the unfavorable gases and a major contributor to air pollution. This pollution can be reduced by injecting the gas into oil reservoirs, where good miscibility increases the oil recovery factor. The minimum miscibility pressure (MMP) was estimated using the Peng-Robinson equation of state (PR-EOS). The South Rumila-63 (SULIAY) well was studied, for which miscible displacement by CO2 is achievable based on the standard screening criteria for successful EOR processes. A PVT report was available for the reservoir under study, containing differential liberation (DL) and constant composition expansion (CCE) tests. PVTi, one of the packages of the Eclipse V.2010 software, was used to achieve this goal. Many trials were made to match the DL test data by tuning some of the PR-EOS parameters through regression analysis, but no acceptable match was obtained, especially for the saturation pressure. The mole fraction of the C6+ fraction was therefore split into several pseudo-components, and the regression analysis was repeated to improve the match by tuning some of the PR-EOS parameters. A good estimate of the saturation pressure and a good match of the PVT properties were obtained. A ternary diagram was constructed to represent the phase behavior of the CO2-oil system and to calculate the MMP for the South Rumila-63 (SULIAY) oil well.
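For context, the pure-component form of the PR-EOS underlying such phase-behavior calculations can be sketched in a few lines. This is only an illustration of the equation itself, not the paper's multi-component PVTi tuning workflow, and the CO2 constants below are textbook values rather than values taken from the report.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_pressure(T, V, Tc, Pc, omega):
    """Peng-Robinson EOS: pressure of a pure component at temperature T (K)
    and molar volume V (m^3/mol), from critical properties and acentric factor."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    return R * T / (V - b) - a * alpha / (V * (V + b) + b * (V - b))

# Textbook CO2 constants: Tc = 304.13 K, Pc = 7.3773 MPa, omega = 0.224
print(pr_pressure(350.0, 1.0e-3, 304.13, 7.3773e6, 0.224))
```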
The objective of this work is to combine human biometric characteristics with unique attributes of the computer in order to protect computer networks and resource environments through the development of authentication and authorization techniques. On the biometric side, the best available methods and algorithms were studied, and the fingerprint was concluded to be the best, although it has some flaws. The fingerprint algorithm was therefore improved to enhance the clarity of the ridge and valley structures of fingerprint images, taking into account the estimated orientation and frequency of the neighboring ridges. On the computer side, the computer and its components, like a human, have unique attributes.
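The ridge-orientation estimate mentioned above is classically computed block-wise from image gradients. The sketch below shows that standard gradient method, assumed here as the baseline the improved algorithm builds on; it is not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def ridge_orientation(img, block=16):
    """Block-wise ridge orientation (radians) via the standard gradient method."""
    gx = ndimage.sobel(img.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)   # vertical gradient
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            bx = gx[i:i + block, j:j + block]
            by = gy[i:i + block, j:j + block]
            # dominant gradient direction by least squares; ridges run perpendicular
            num = 2.0 * np.sum(bx * by)
            den = np.sum(bx * bx - by * by)
            theta[i // block, j // block] = 0.5 * np.arctan2(num, den) + np.pi / 2
    return theta
```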
Most of today's techniques encrypt all of the image data, which consumes a tremendous amount of time and computational effort. This work introduces a selective image encryption technique that encrypts predetermined portions of the original image data in order to reduce the encryption/decryption time and the computational complexity of processing the huge image data. The technique applies a compression algorithm based on the Discrete Cosine Transform (DCT). Two approaches are implemented, using color space conversion (YCbCr and RGB) as a preprocessing step for the compression phase, and the resulting compressed sequence is selectively encrypted using a randomly generated combined secret key.
The results showed a significant reduction in the encryption/decryption time.
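A minimal sketch of the selective idea follows: blockwise DCT, then encryption of only the low-frequency coefficients of each block. The coefficient count, the row-major stand-in for a zig-zag scan, and the PRNG keystream standing in for the combined secret key are all assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.fft import dctn

def selective_encrypt(channel, seed, n_coeffs=10):
    """Blockwise 8x8 DCT of one image plane (e.g. Y after RGB -> YCbCr), then
    XOR-encryption of only the first n_coeffs quantized coefficients per block.
    Returns the DCT-domain (compressed) representation, partially encrypted."""
    rng = np.random.default_rng(seed)   # PRNG keystream stands in for the combined key
    h, w = channel.shape
    out = channel.astype(float)
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            block = dctn(out[i:i + 8, j:j + 8], norm="ortho")
            flat = block.ravel()        # row-major order as a simple proxy for zig-zag
            q = flat[:n_coeffs].astype(np.int16)
            flat[:n_coeffs] = q ^ rng.integers(0, 256, n_coeffs, dtype=np.int16)
            out[i:i + 8, j:j + 8] = block
    return out
```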
In combinatorial testing, the construction of covering arrays is the key challenge because of the multiple aspects that influence it. A wide range of combinatorial problems can be solved using metaheuristic and greedy techniques. Combining a greedy technique with a metaheuristic search technique such as hill climbing (HC) can produce feasible results for combinatorial tests. Metaheuristic-based methods deal with the tuples that may be left over after the greedy strategy removes redundancy; the metaheuristic algorithm then ensures that the resulting test suite is near-optimal. As a result, the use of both greedy and HC algorithms in a single test generation system is a good candidate if constructed correctly, as sketched below.
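A toy illustration of the greedy-plus-HC combination for pairwise coverage follows. The candidate pool size, suite size, and acceptance rule are arbitrary choices for the sketch, not the paper's construction.

```python
import itertools, random

def pairs(test):
    """All parameter-pair interactions (tuples) covered by one test row."""
    return {(i, j, test[i], test[j])
            for i, j in itertools.combinations(range(len(test)), 2)}

def greedy_seed(k, v, rows):
    """Greedy phase: each new row is the best of 50 random candidates."""
    suite, covered = [], set()
    for _ in range(rows):
        best = max((tuple(random.randrange(v) for _ in range(k)) for _ in range(50)),
                   key=lambda t: len(pairs(t) - covered))
        suite.append(list(best))
        covered |= pairs(best)
    return suite

def hill_climb(suite, v, iters=5000):
    """HC phase: single-cell mutations, kept only when coverage does not worsen."""
    k = len(suite[0])
    all_pairs = {(i, j, a, b)
                 for i, j in itertools.combinations(range(k), 2)
                 for a in range(v) for b in range(v)}
    def uncovered(s):
        return len(all_pairs - set().union(*(pairs(t) for t in s)))
    cost = uncovered(suite)
    for _ in range(iters):
        if cost == 0:
            break
        r, c = random.randrange(len(suite)), random.randrange(k)
        old = suite[r][c]
        suite[r][c] = random.randrange(v)
        new_cost = uncovered(suite)
        if new_cost <= cost:
            cost = new_cost
        else:
            suite[r][c] = old      # revert a worsening move
    return suite, cost

suite, missing = hill_climb(greedy_seed(k=5, v=3, rows=15), v=3)
print("uncovered pairwise tuples:", missing)
```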
Social media platforms are known as detectors that measure the activities of users in the real world. However, the huge and unfiltered feed of messages posted on social media triggers social warnings, particularly when these messages contain hate speech directed at a specific individual or community. The negative effect of these messages on individuals or on society at large is of great concern to governments and non-governmental organizations. Word clouds provide a simple and efficient means of visually conveying the most common words in text documents. This research aims to develop a word cloud model based on hateful words in online social media environments such as Google News. Several steps are involved, including data acquisition.
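A frequency-counting step of the kind such a model needs might look like the sketch below. HATE_LEXICON is a hypothetical placeholder for whatever term list the model uses; rendering with the open-source wordcloud package is one option, shown in a comment.

```python
import re
from collections import Counter

# Hypothetical placeholder lexicon; the paper's actual hateful-word list is not given.
HATE_LEXICON = {"hate", "attack", "slur"}

def hate_word_frequencies(documents):
    """Count lexicon hits across a collection of scraped documents."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())
        counts.update(t for t in tokens if t in HATE_LEXICON)
    return counts

# Rendering, e.g. with the open-source `wordcloud` package:
#   from wordcloud import WordCloud
#   WordCloud().generate_from_frequencies(hate_word_frequencies(docs)).to_file("cloud.png")
```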
This paper addresses the problem of segmenting an image into regions that represent objects, defining the boundary between two regions using connected component labeling (CCL). An efficient segmentation algorithm based on this method is developed and applied to different kinds of images. The algorithm consists of four steps: first, the image is converted to gray level and enhancement operations are applied; second, the result is converted to a binary image; third, edges are detected using the Canny edge detector; finally, the connected components are labeled to produce the segmented images. The best segmentation rates (90%) are obtained using the developed algorithm, compared with 77% obtained using CCL before enhancement.
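The four steps map directly onto standard library calls. A minimal sketch, assuming a fixed binarization threshold and that the Canny edges act as region boundaries (the abstract does not specify either detail):

```python
from skimage import color, feature, measure

def segment(rgb, threshold=0.5):
    """Four-step pipeline from the abstract: gray -> binary -> Canny -> CCL."""
    gray = color.rgb2gray(rgb)               # step 1: gray-level conversion
    binary = gray > threshold                # step 2: binarization (fixed threshold assumed)
    edges = feature.canny(gray)              # step 3: Canny edge detection
    labels = measure.label(binary & ~edges)  # step 4: connected component labeling;
                                             # edges act as boundaries between regions
    return labels  # each distinct integer label is one segmented region (object)
```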
A remarkable correlation between chaotic systems and cryptography has been established, owing to their sensitivity to initial states, unpredictability, and complex behaviors. In one line of development, the stages of a chaotic stream cipher are applied to a discrete chaotic dynamical system for the generation of pseudorandom bits. Some of these generators are based on 1D chaotic maps and others on 2D ones. In the current study, a pseudorandom bit generator (PRBG) based on a new 2D chaotic logistic map is proposed that runs side-by-side and commences from random independent initial states. The structure of the proposed model consists of three components: a mouse input device, the proposed 2D chaotic system, and an initial permutation (IP) table. Statistical tests were applied to the generated bit sequences.
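The paper's new 2D map is not reproduced here; as a stand-in, the sketch below derives bits from two classical logistic maps run side-by-side from independent initial states, emitting 1 whenever the first orbit leads the second. Parameter r = 3.99 is a conventional chaotic choice, not a value from the paper.

```python
def chaotic_prbg(x0, y0, n_bits, r=3.99):
    """Pseudorandom bits from two logistic maps running side-by-side from
    independent initial states; emits 1 when the first orbit leads the second.
    (Stand-in for the paper's 2D chaotic logistic map.)"""
    x, y, bits = x0, y0, []
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        bits.append(1 if x > y else 0)
    return bits

# In the paper, the initial states come from a mouse-input entropy source.
print(''.join(map(str, chaotic_prbg(0.3141, 0.5926, 64))))
```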
This paper proposes a new encryption method. It combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation: the approach merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the remaining 15 keys. This complexity raises the level of the ciphering process; moreover, each derivation shifts the key by only one bit to the right. The second is the nature of the encryption process itself: it uses two keys and mixes one round of DES with one round of AES to reduce the processing time.
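One reading of that key schedule, merging the two 64-bit halves and then rotating right one bit per derived key, is sketched below. This is an interpretation of the abstract, not the paper's exact procedure, and the sample key values are arbitrary.

```python
MASK64 = (1 << 64) - 1

def derive_round_keys(des64: int, aes64: int, rounds: int = 16):
    """Interpretation of the hybrid schedule: 64 DES-derived bits and 64
    AES-derived bits form a 128-bit root key; each further key is the
    previous one rotated right by a single bit."""
    root = ((des64 & MASK64) << 64) | (aes64 & MASK64)
    keys = [root]
    for _ in range(rounds - 1):
        k = keys[-1]
        keys.append(((k & 1) << 127) | (k >> 1))  # 128-bit rotate-right by 1
    return keys

# Arbitrary sample halves, not real cipher output:
keys = derive_round_keys(0x0123456789ABCDEF, 0xFEDCBA9876543210)
print(len(keys), format(keys[1], "032x"))  # root key plus the 15 derived keys
```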
This research depends on the relationship between the reflected spectrum, the nature of each target, its area, and the percentage of its presence together with other targets within the unit of the target area. Changes in land cover were detected for different years using satellite images and Modified Spectral Angle Mapper (MSAM) processing, where Landsat images are handled with two software packages (MATLAB 7.11 and ERDAS Imagine 2014). The proposed supervised classification method (MSAM), implemented in MATLAB, was used alongside the supervised maximum likelihood classifier in ERDAS Imagine to obtain the most precise results and to detect environmental changes over the periods. Despite using two classification methods...
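The classical SAM rule that MSAM modifies computes the angle between each pixel spectrum and a reference spectrum and assigns the class with the smallest angle. The baseline is sketched below; the paper's specific modification is not reproduced.

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Classical SAM: angle (radians) between each pixel spectrum and a reference.
    pixels: (N, bands) array; reference: (bands,) target spectrum."""
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

def classify(pixels, references):
    """Assign each pixel to the reference class with the smallest spectral angle."""
    angles = np.stack([spectral_angle(pixels, r) for r in references], axis=1)
    return angles.argmin(axis=1)
```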
Disease diagnosis with computer-aided methods has been extensively studied and applied in diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast diseases based on clinical data helps doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit the Convolutional Neural Network (CNN) in discriminating breast MRI scans into pathological and healthy. The study presents a fully automated and efficient deep-feature extraction algorithm that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to the feature extraction...
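As an illustration of the kind of network involved, the sketch below feeds the two MRI sequences as two input channels of a small CNN with a binary head. The architecture, channel pairing, and sizes are assumptions for the sketch; the paper's actual network is not specified here.

```python
import torch
import torch.nn as nn

class BreastMRINet(nn.Module):
    """Small CNN for pathological-vs-healthy classification of MRI slices,
    with one input channel per sequence (T2W-TSE and STIR assumed paired)."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # spatial features pooled to a vector
        )
        self.classifier = nn.Linear(32, 2)      # healthy / pathological logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = BreastMRINet()(torch.randn(4, 2, 128, 128))  # batch of 4 two-sequence scans
print(logits.shape)  # torch.Size([4, 2])
```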
In data mining and machine learning methods, it is traditionally assumed that the training data, the test data, and the data to be processed in the future share the same feature-space distribution. This condition does not hold in the real world. Domain adaptation-based methods are used to overcome this challenge. One of the existing challenges in domain adaptation-based methods is selecting the most efficient features, so that they also show the most efficiency in the destination domain. In this paper, a new feature selection method based on deep reinforcement learning is proposed. In the proposed method, in order to select the best and most appropriate features, the essential policies...
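The deep RL agent itself is not described in the excerpt; as a loose stand-in, the sketch below learns a per-feature value estimate with an epsilon-greedy policy, where score_fn is a hypothetical target-domain validation score. This illustrates policy-driven feature selection only, not the paper's method.

```python
import random

def rl_feature_selection(features, score_fn, episodes=200, eps=0.2):
    """Epsilon-greedy sketch of policy-driven feature selection.
    score_fn(subset) is a hypothetical target-domain validation score in [0, 1]."""
    value = {f: 0.0 for f in features}   # running value estimate per feature
    counts = {f: 0 for f in features}
    best_subset, best_score = set(), 0.0
    for _ in range(episodes):
        # keep features the policy currently values; explore with probability eps
        subset = {f for f in features if random.random() < eps or value[f] > 0.5}
        if not subset:
            continue
        s = score_fn(subset)
        for f in subset:                 # credit every selected feature
            counts[f] += 1
            value[f] += (s - value[f]) / counts[f]
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset, best_score
```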