Spatial data analysis is performed to remove skewness, a measure of the asymmetry of a probability distribution, and to improve the normality, the degree to which the data follow the bell-shaped normal distribution, of petrophysical properties such as porosity, permeability, and saturation, which are visualized using histograms. Three steps of spatial analysis are involved here: exploratory data analysis, variogram analysis, and finally distribution of the properties using geostatistical algorithms. The Mishrif Formation (unit MB1) in the Nasiriya Oil Field was chosen to analyze and model the data from the first eight wells. The field is an anticline structure with a general northwest-southeast trend. The Mishrif Formation is an important middle Cretaceous carbonate formation in the stratigraphic column of southern Iraq. Applying spatial data analysis revealed the nature of the data and provided a quantitative summary, making it straightforward to remove the skewness and improve the normality of the petrophysical properties before distributing them with the geostatistical algorithms. It also showed that unit MB1 of the Mishrif Formation has good reservoir properties, with high porosity (0.182) and permeability (7.36 md) and low water saturation (0.285), making it suitable for the accumulation of oil.
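The variogram-analysis step can be illustrated with a short sketch. The snippet below computes the classical empirical semivariogram for scattered porosity samples; the sample locations, lag width, and number of lags are hypothetical placeholders, not values from the study.

    import numpy as np

    def empirical_semivariogram(coords, values, lag_width, n_lags):
        """Classical estimator: gamma(h) = (1 / (2*N(h))) * sum (z_i - z_j)^2
        over all sample pairs whose separation falls in lag bin h."""
        # pairwise separation distances and squared value differences
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = (values[:, None] - values[None, :]) ** 2
        i, j = np.triu_indices(len(values), k=1)      # count each pair once
        dist, sqdiff = d[i, j], sq[i, j]

        lags, gammas = [], []
        for k in range(n_lags):
            lo, hi = k * lag_width, (k + 1) * lag_width
            mask = (dist >= lo) & (dist < hi)
            if mask.any():
                lags.append(dist[mask].mean())
                gammas.append(0.5 * sqdiff[mask].mean())
        return np.array(lags), np.array(gammas)

    # hypothetical porosity samples at well locations (x, y in metres)
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 5000, size=(80, 2))
    porosity = 0.18 + 0.03 * rng.standard_normal(80)
    lags, gammas = empirical_semivariogram(coords, porosity, lag_width=500, n_lags=8)
    print(np.column_stack([lags, gammas]))

A variogram model fitted to these (lag, gamma) pairs would then drive the kriging or simulation algorithm used to distribute the properties between wells.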
Electrospun nanofiber membranes have gained considerable interest for water filtration applications. In this work, the fabrication and characterization of electrospun polyacrylonitrile-based nonwoven nanofiber membranes are reported. The membranes' performance and antifouling properties were then evaluated in removing emulsified oil using a cross-flow filtration system. Membranes fabricated with different polyacrylonitrile (PAN) concentrations (8, 11, and 14 wt.%) in N,N-dimethylformamide (DMF) solvent exhibited different average fiber sizes, porosities, contact angles, permeabilities, oil rejections, and antifouling properties. Analyses of the surface morphology of the fabricated membranes before and after oil removal revealed …
In this study, the upgrading of Iraqi heavy crude oil was achieved using the solvent deasphalting (SDA) approach and enhanced solvent deasphalting (e-SDA) with added nanosilica (NS). The NS was synthesized from local sand. The XRD result indicated an amorphous phase, with a broad peak at 2Θ = 22–23°, and the FTIR spectra confirmed the inclusion of hydrogen-bonded silanol groups (Si–O–H) and siloxane groups (Si–O–Si). The SDA process was carried out using n-pentane solvent at various solvent-to-oil ratios (SOR) of 4–16/1 ml/g, at room and reflux temperatures, with a 0.5 h mixing time. In the e-SDA process, various fractions of the NS (1–7 wt.%) were used, with a 61 nm particle size and a 560.86 m²/g surface area, in the presence of 12 m…
Oily wastewater is one of the most challenging streams to treat, especially if the oil exists in emulsified form. In this study, the electrospinning method was used to prepare nanofibrous polyvinylidene fluoride (PVDF) membranes and to study their performance in oil removal. Graphene particles were embedded in the electrospun PVDF membrane to enhance its efficiency. The prepared membranes were characterized using scanning electron microscopy (SEM) to verify that the graphene was stabilized homogeneously on the membrane surface, while FTIR was used to detect the functional groups on the membrane surface. Membrane wettability was assessed by measuring the contact angle. The efficiency of the PVDF and PVDF/graphene membranes …
In data transmission, a change in a single bit of the received data may lead to misunderstanding or a disaster. Every bit of the transmitted information has high priority, especially information such as the address of the receiver. The ability to detect every single-bit change is therefore a key issue in the data transmission field.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
Two novel methods were suggested to detect binary bit-change errors when transmitting data over noisy media. Those methods were: the 2D-Checksum method …
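To make the baseline schemes concrete, here is a minimal sketch of the two classical detectors the abstract compares: a single even-parity bit, which misses any even number of flipped bits, and a two-dimensional parity check over a bit matrix. This illustrates only the standard techniques, not the paper's two novel methods.

    def parity_bit(bits):
        """Even-parity bit: XOR of all data bits."""
        p = 0
        for b in bits:
            p ^= b
        return p

    def detect_single_parity(bits, p):
        """True if an error is detected (only odd bit-flip counts)."""
        return parity_bit(bits) != p

    def parity_2d(matrix):
        """Row and column parity bits for a 2-D block of bits."""
        rows = [parity_bit(row) for row in matrix]
        cols = [parity_bit(col) for col in zip(*matrix)]
        return rows, cols

    def detect_2d(matrix, rows, cols):
        r, c = parity_2d(matrix)
        return r != rows or c != cols

    data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
    rows, cols = parity_2d(data)

    # flip two bits in the same row: single parity on that row misses it,
    # but the column parities still catch the corruption
    corrupted = [row[:] for row in data]
    corrupted[0][0] ^= 1
    corrupted[0][1] ^= 1
    print(detect_single_parity(corrupted[0], rows[0]))  # False -> missed
    print(detect_2d(corrupted, rows, cols))             # True  -> detected

Even 2-D parity fails when four flips form the corners of a rectangle, which is the kind of gap that motivates stronger combined schemes.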
Machine learning offers a significant advantage for many difficulties in the oil and gas industry, especially when it comes to resolving complex challenges in reservoir characterization. Permeability is one of the most difficult petrophysical parameters to predict using conventional logging techniques. Clarifications of the workflow methodology are presented alongside comprehensive models in this study. The purpose of this study is to provide a more robust technique for predicting permeability; previous studies on the Bazirgan field have attempted to do so, but their estimates were vague, and their methods are obsolete and poorly suited to the real, rigorous permeability computation. To …
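As a generic illustration of this kind of workflow (not the specific models used in the study), the sketch below trains a random-forest regressor to predict permeability from conventional log responses. The synthetic feature names (GR, RHOB, NPHI, DT) and the toy porosity-permeability relation are assumptions standing in for real field data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # synthetic stand-in for well-log features: gamma ray (GR), bulk density
    # (RHOB), neutron porosity (NPHI), sonic (DT); in practice these would
    # be loaded from the field's log and core data
    rng = np.random.default_rng(42)
    n = 500
    GR = rng.uniform(20, 120, n)
    NPHI = rng.uniform(0.05, 0.30, n)
    RHOB = 2.71 - 1.7 * NPHI + rng.normal(0, 0.02, n)
    DT = 55 + 120 * NPHI + rng.normal(0, 2, n)
    # permeability modelled as roughly log-linear in porosity (toy relation)
    log_k = -2 + 15 * NPHI + rng.normal(0, 0.3, n)

    X = np.column_stack([GR, RHOB, NPHI, DT])
    X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2,
                                              random_state=42)
    model = RandomForestRegressor(n_estimators=300, random_state=42)
    model.fit(X_tr, y_tr)
    print("R2 on held-out samples:",
          round(r2_score(y_te, model.predict(X_te)), 3))
    print("predicted permeability (md):", 10 ** model.predict(X_te[:3]))

Regressing on log-permeability is the usual choice because permeability spans orders of magnitude and is approximately log-normal.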
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One efficient approach for data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). Data packets are then forwarded to a sink node in a single-hop or multi-hop manner, which can increase the energy depletion of the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co…
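A common way to mitigate CH energy depletion is to rotate the cluster-head role toward nodes with more residual energy and a shorter path to the sink. The sketch below is a generic illustration of such a weighted selection rule; the scoring weights and node model are hypothetical and are not the mechanism proposed in the paper.

    import math
    import random

    class Node:
        def __init__(self, x, y, energy):
            self.x, self.y, self.energy = x, y, energy

    def select_cluster_head(nodes, sink, w_energy=0.7, w_dist=0.3):
        """Pick the node with the best combined score: high residual
        energy is rewarded, long distance to the sink is penalized."""
        max_e = max(n.energy for n in nodes)
        max_d = max(math.dist((n.x, n.y), sink) for n in nodes)

        def score(n):
            return (w_energy * n.energy / max_e
                    - w_dist * math.dist((n.x, n.y), sink) / max_d)

        return max(nodes, key=score)

    random.seed(1)
    cluster = [Node(random.uniform(0, 100), random.uniform(0, 100),
                    random.uniform(0.2, 1.0)) for _ in range(10)]
    ch = select_cluster_head(cluster, sink=(50.0, 0.0))
    print(f"CH at ({ch.x:.1f}, {ch.y:.1f}) with energy {ch.energy:.2f}")

Re-running the selection each round as energies drain spreads the CH burden across the cluster.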
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing. Therefore, specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, which are support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes …
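A compressed sketch of the idea behind SSA-based imputation follows, assuming the standard salp swarm update equations and KNN cross-validation accuracy as the fitness function. The toy data, swarm size, and iteration count are placeholders; this is not the authors' ISSA implementation.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # toy stand-in for PIDD: NaNs mark the missing entries
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    miss = rng.random(X.shape) < 0.1
    X[miss] = np.nan
    idx = np.argwhere(miss)                       # positions to impute
    lb, ub = np.nanmin(X), np.nanmax(X)

    def fitness(candidate):
        """Higher KNN cross-validation accuracy = better imputation."""
        Xc = X.copy()
        Xc[idx[:, 0], idx[:, 1]] = candidate
        return cross_val_score(KNeighborsClassifier(5), Xc, y, cv=3).mean()

    # each salp is one full vector of candidate imputed values
    n_salps, n_iter, dim = 15, 20, len(idx)
    salps = rng.uniform(lb, ub, size=(n_salps, dim))
    best_fit, food = -1.0, None
    for s in salps:
        f = fitness(s)
        if f > best_fit:
            best_fit, food = f, s.copy()

    for l in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * l / n_iter) ** 2)   # exploration decay
        for i in range(n_salps):
            if i == 0:                            # leader circles the food
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                 # followers trail ahead salp
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], lb, ub)
            f = fitness(salps[i])
            if f > best_fit:
                best_fit, food = f, salps[i].copy()

    print("best KNN CV accuracy with SSA-imputed values:", round(best_fit, 3))

The same fitness loop could score SVM or Naïve Bayes instead of KNN, which is how a multi-classifier comparison like the paper's would be wired in.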
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB). This paper investigates the performance of these two classification methods using the Car Evaluation dataset. Models were built for both algorithms and the results were compared. Our experimental results indicated that the BNN classifier yields higher accuracy than the NB classifier but is less efficient, because it is time-consuming and difficult to analyze due to its black-box implementation.
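A minimal sketch of such a comparison, using scikit-learn stand-ins (MLPClassifier as the backpropagation network, CategoricalNB for Naïve Bayes), is shown below. The synthetic data only mimics the shape of the UCI Car Evaluation dataset (1728 rows, six categorical attributes); the acceptance rule and hyperparameters are assumptions.

    import time
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.neural_network import MLPClassifier

    # synthetic stand-in: six ordinal-encoded categorical attributes
    # (buying, maint, doors, persons, lug_boot, safety)
    rng = np.random.default_rng(7)
    X = rng.integers(0, 4, size=(1728, 6))
    y = (X[:, 3] > 0) & (X[:, 5] > 0) & (X[:, 0] + X[:, 1] < 6)  # toy rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=7)

    for name, clf in [("Naive Bayes", CategoricalNB()),
                      ("Backprop NN", MLPClassifier(hidden_layer_sizes=(32,),
                                                    max_iter=1000,
                                                    random_state=7))]:
        t0 = time.perf_counter()
        clf.fit(X_tr, y_tr)
        acc = clf.score(X_te, y_te)
        print(f"{name}: accuracy={acc:.3f}, "
              f"train time={time.perf_counter() - t0:.2f}s")

Timing the fit alongside the accuracy makes the paper's accuracy-versus-efficiency trade-off directly visible in the output.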
In this study, we compared the LASSO and SCAD methods, two penalized methods for dealing with partial quantile regression models. The Nadaraya-Watson kernel estimator was used to estimate the nonparametric part, and the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but according to the mean squared error (MSE) criterion, the SCAD method was the best after estimating the missing data using the mean imputation method.
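For reference, the estimation ingredients named above have standard textbook forms, sketched in LaTeX below under the usual notation (K a kernel function, h the bandwidth, \hat{\sigma} the sample standard deviation, n the sample size, a the SCAD shape parameter); the paper's own notation may differ slightly.

    \hat{m}_h(x) = \frac{\sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right) y_i}
                        {\sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right)}
    \qquad \text{(Nadaraya-Watson estimator)}

    h_{\text{rot}} = 1.06\,\hat{\sigma}\, n^{-1/5}
    \qquad \text{(Gaussian rule-of-thumb bandwidth)}

    p_{\lambda}'(\theta) = \lambda \left\{ I(\theta \le \lambda)
        + \frac{(a\lambda - \theta)_{+}}{(a - 1)\lambda}\, I(\theta > \lambda) \right\},
    \quad a > 2 \ (\text{commonly } a = 3.7)
    \qquad \text{(SCAD penalty, specified via its derivative)}

Unlike the LASSO's constant penalty rate \lambda, the SCAD rate tapers to zero for large coefficients, which reduces the bias on strong effects and is one reason SCAD can win on MSE.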
This paper presents a fast and robust approach to English text encryption and decryption based on the Pascal matrix. The technique encrypts Arabic or English text, or both, and shows the result of applying the method to plain text (the original message): the intelligible plaintext is transformed into unintelligible ciphertext in order to secure information from unauthorized access and information theft. An encryption scheme usually uses a pseudo-random encryption key generated by an algorithm; here, all of this is done using the Pascal matrix. Encryption and decryption are implemented in MATLAB as the programming language, with Notepad++ used to write the input text.
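A minimal sketch of the underlying idea follows, assuming encryption by multiplying blocks of character codes with a symmetric Pascal matrix. Because that matrix has determinant 1, its inverse is integer-valued and decryption is exact; the block size and padding character are hypothetical, and this is not necessarily the paper's exact scheme.

    import numpy as np
    from math import comb

    def pascal_matrix(n):
        """Symmetric Pascal matrix S[i][j] = C(i+j, i); det(S) = 1."""
        return np.array([[comb(i + j, i) for j in range(n)]
                         for i in range(n)])

    def pascal_inverse(n):
        # S = L @ L.T with L[i][j] = C(i, j); the inverse of L has
        # entries (-1)^(i-j) * C(i, j), so S^-1 = (L^-1).T @ L^-1
        L_inv = np.array([[(-1) ** (i - j) * comb(i, j) for j in range(n)]
                          for i in range(n)])
        return L_inv.T @ L_inv

    def encrypt(text, n=4, pad=" "):
        text += pad * (-len(text) % n)            # pad to a whole block
        S = pascal_matrix(n)
        return [S @ [ord(c) for c in text[k:k + n]]
                for k in range(0, len(text), n)]

    def decrypt(blocks, n=4):
        S_inv = pascal_inverse(n)
        return "".join(chr(v) for b in blocks for v in S_inv @ b)

    cipher = encrypt("Hello world")
    print(cipher)                                  # unintelligible numbers
    print(decrypt(cipher))                         # "Hello world " (padded)

Computing the inverse from the binomial factorization, rather than numerically, keeps the round trip exact for arbitrarily long texts.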