Three-dimensional (3D) image and medical image processing, both regarded as big-data analysis, have attracted significant attention in recent years, and efficient 3D object recognition techniques would benefit both. To date, however, most proposed 3D object recognition methods suffer from high computational complexity: complexity and execution time grow as the dimensions of the object increase, which is precisely the case in 3D recognition. Finding a method that achieves high recognition accuracy at low computational cost is therefore essential. This paper presents such a method. Specifically, it uses a fast overlapped block-processing technique that handles higher-order polynomials and high-dimensional objects and reduces the computational complexity of feature extraction. The method exploits Charlier polynomials and their moments together with a support vector machine (SVM) classifier. It is evaluated on the well-known McGill benchmark dataset and compared with existing 3D object recognition methods. The results show that the proposed approach achieves high recognition rates under different noisy environments, mitigates noise distortion, and outperforms existing methods in computation time under both noise-free and noisy conditions.
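As background, Charlier moments of a volume can be computed separably from the one-dimensional Charlier polynomials, which satisfy a three-term recurrence. The sketch below is a minimal illustration of that computation, not the authors' fast overlapped implementation; the function names, the polynomial parameter a, and the moment order are assumptions chosen for illustration.

```python
import numpy as np

def charlier_poly(N, size, a=10.0):
    """Charlier polynomials C_0..C_{N-1} at x = 0..size-1 via the
    three-term recurrence  a*C_{n+1} = (n + a - x)*C_n - n*C_{n-1}."""
    x = np.arange(size, dtype=float)
    C = np.zeros((N, size))
    C[0] = 1.0
    if N > 1:
        C[1] = (a - x) / a
    for n in range(1, N - 1):
        C[n + 1] = ((n + a - x) * C[n] - n * C[n - 1]) / a
    return C

def charlier_moments_3d(volume, N, a=10.0):
    """Order-N 3D Charlier moments of a volume, computed separably:
    M[p,q,r] = sum_{x,y,z} C_p(x) C_q(y) C_r(z) f(x,y,z)."""
    Cx = charlier_poly(N, volume.shape[0], a)
    Cy = charlier_poly(N, volume.shape[1], a)
    Cz = charlier_poly(N, volume.shape[2], a)
    return np.einsum('px,qy,rz,xyz->pqr', Cx, Cy, Cz, volume)
```

Since C_0 is identically 1, the moment M[0,0,0] is simply the sum of all voxel intensities, which gives a quick sanity check on the recurrence.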
This paper presents a hybrid approach to the null-values problem that combines rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete records, the learning data, is used to derive decision rule sets that are then applied to solve the incomplete-data problem. The swarm component performs feature selection, with the bees algorithm serving as the heuristic search and rough set theory as the evaluation function. A second feature selection algorithm, ID3, is also presented; it works as a statistical rather than an intelligent algorithm. The two approaches are compared in terms of their performance for null-value estimation.
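For context, rough-set feature selection commonly scores a candidate feature subset by its dependency degree: the fraction of records whose equivalence class under the subset maps to a single decision value. A heuristic search such as the bees algorithm then seeks a subset maximizing this score. The sketch below illustrates only the dependency-degree evaluation; it is not the authors' code, and the list-of-dicts data layout is an assumption.

```python
from collections import defaultdict

def dependency_degree(table, features, decision):
    """Rough-set dependency degree gamma_B(D): the fraction of rows whose
    equivalence class under `features` is consistent, i.e. maps to exactly
    one value of the `decision` attribute."""
    classes = defaultdict(set)   # feature-value tuple -> decision values seen
    sizes = defaultdict(int)     # feature-value tuple -> class size
    for row in table:
        key = tuple(row[f] for f in features)
        classes[key].add(row[decision])
        sizes[key] += 1
    # positive region: rows in classes with a single decision value
    pos = sum(sizes[k] for k, ds in classes.items() if len(ds) == 1)
    return pos / len(table)
```

A subset with dependency degree 1.0 classifies the learning data without contradiction; the search trades this score off against subset size.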
This research paper aimed to quantitatively characterize the pore structure of shale reservoirs. Six Silurian shale samples from the Ahnet basin were selected for nitrogen adsorption-desorption analysis. The experimental findings showed that all samples are composed mainly of mesopores with slit-like pore shapes. The Barrett-Joyner-Halenda (BJH) pore volume ranges from 0.014 to 0.046 cm3/100 g; the lowest value was recorded in sample AHTT-1 and the highest in AHTT-6, while the remaining samples (AHTT-2, AHTT-3, AHTT-4, AHTT-5) share a similar average value of 0.03 cm3/100 g. The surface area and pore size distribution were in the ranges 3.8 to 11.1 m2/g and 1.7 to 40 nm, respectively.
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study small-sample behavior under different functions, sample sizes, and variances. The results show that the wavelet smoother performs best, according to the mean average squared error criterion, in all cases considered.
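As an illustration of one of the two smoothers being compared, the Nadaraya-Watson kernel estimator forms a locally weighted average of the responses. This is a minimal sketch assuming a Gaussian kernel and a fixed bandwidth; the abstract does not specify the paper's exact kernel estimator or settings.

```python
import numpy as np

def nw_kernel_smoother(x_train, y_train, x_eval, h=0.05):
    """Nadaraya-Watson regression estimate with a Gaussian kernel:
    m(x) = sum_j K((x - x_j)/h) y_j / sum_j K((x - x_j)/h)."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(x_train)[None, :]) / h
    w = np.exp(-0.5 * u ** 2)          # Gaussian kernel weights
    return (w @ np.asarray(y_train)) / w.sum(axis=1)
```

Smoothing noisy observations of a smooth function with a moderate bandwidth reduces the mean squared error relative to the raw data, which is the kind of criterion the simulation study uses to rank the two smoothers.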
This study aimed to assess the efficiency of N. oleander in removing heavy metals such as copper (Cu) from wastewater. A toxicity test was conducted outdoors for 65 days to estimate the ability of N. oleander to tolerate Cu in synthetic wastewater. Based on a previous range-finding test, five concentrations were used (0, 50, 100, 300, and 510 mg/l). The results showed a maximum removal efficiency of 99.9% on day 49 for the 50 mg/l treatment, and a minimum of 94% on day 65 for the 510 mg/l treatment. The water concentration was within the permissible limits for river conservation: it was 0.164 on day 35 for the 50 mg/l treatment, decreased thereafter until the end of the observation, and was 0.12 at d
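The removal efficiencies quoted above follow the standard definition, the percentage drop from the initial concentration. A one-line sketch; the 0.05 mg/l final concentration in the comment is hypothetical, chosen only to reproduce the 99.9% figure, and is not a value reported in the study.

```python
def removal_efficiency(c_initial, c_final):
    """Percentage removal of a contaminant: 100 * (C0 - Ct) / C0."""
    return 100.0 * (c_initial - c_final) / c_initial

# e.g. an initial Cu dose of 50 mg/l reduced to a hypothetical 0.05 mg/l
# gives 100 * (50 - 0.05) / 50 = 99.9 percent removal
```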
This research presents a new study in reactive distillation based on a consecutive reaction: the saponification of diethyl adipate with NaOH solution, which occurs in two steps. The role of distillation was to withdraw the intermediate product, monoethyl adipate, from the reacting mixture before the second conversion to disodium adipate occurred. Monoethyl adipate appeared successfully in the distillate liquid. The percentage conversion from diester to monoester was greatly enhanced, reaching 86% compared with only 15.3% for reaction without distillation, a roughly five-fold enhancement. The presence of two layers in both
This paper proposes a novel method for generating true random numbers (TRNs) using electromechanical switches. The proposed generator is implemented on an FPGA board. The system exploits electromechanical switch bounce to produce a randomly fluctuating signal that triggers a counter, which in turn generates a binary random number. Compared with other true-random-number generation methods, the proposed approach offers a high degree of randomness using a simple circuit that can be built from off-the-shelf components. The system is implemented with a commercial relay circuit connected to an FPGA board that processes and records the generated random sequences. Applying statistical testing on th
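The bounce-sampled-counter idea can be modeled in a few lines: a free-running counter is latched at irregular bounce instants, and the low bit of the latched value becomes the output bit. The sketch below is only an illustration of that architecture; the irregular intervals are simulated in software here, whereas in the actual design the entropy comes from the physical switch, so this model is not itself a TRNG.

```python
import random

def bounce_trng_bits(n_bits, counter_mod=256):
    """Toy model of a bounce-sampled-counter TRNG: a free-running counter
    is latched at irregular 'bounce' instants and the least-significant
    bit of each latched value is emitted.  The irregular intervals are
    simulated with random.randrange; in hardware they come from the
    unpredictable timing of the relay contact bounce."""
    bits = []
    counter = 0
    for _ in range(n_bits):
        # advance the counter by an unpredictable number of ticks
        counter = (counter + random.randrange(1, 1000)) % counter_mod
        bits.append(counter & 1)
    return bits
```

Because the counter runs much faster than the bounce events, even small timing jitter decorrelates successive latched values, which is what makes the low bit a reasonable entropy source in the hardware version.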
Extensive studies have been made of the distribution of the steady electric field within an inhomogeneous, intense field region containing metal rings: the field rises with a positive slope to its maximum value at the midpoint, then declines with an equal negative slope toward the other end, the steady field being distributed sequentially. The empirical studies focused on the molecules passing through this field.
In this research, the parameters of the Type-1 Gumbel distribution for maximum values were estimated using two methods: the method of moments (MoM) and the modified moments (MM) method. Simulation was used to compare the estimation methods and identify the best one: random data following the Gumbel distribution were generated for three models of real parameter values and different sample sizes, with R = 500 replicates per sample. The results were tabulated for comparison, which was made on the basis of the mean squared error (MSE).
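For the ordinary method of moments, the Type-1 Gumbel (maxima) distribution has mean mu + gamma*beta (gamma being the Euler-Mascheroni constant) and variance (pi^2/6)*beta^2, which gives the closed-form estimators beta_hat = s*sqrt(6)/pi and mu_hat = xbar - gamma*beta_hat. A minimal sketch; the modified-moments variant and the paper's three parameter models are not reproduced here, and the sample in the usage example is illustrative.

```python
import math
import random  # used only in the usage example below

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    """Method-of-moments estimates (mu, beta) for the Type-1 Gumbel
    (maxima) distribution: beta = s*sqrt(6)/pi, mu = xbar - gamma*beta."""
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    beta = s * math.sqrt(6) / math.pi
    mu = xbar - EULER_GAMMA * beta
    return mu, beta

# Usage: draw a Gumbel sample by inverse-CDF, x = mu - beta*ln(-ln(u)),
# with illustrative true parameters mu = 2.0, beta = 1.5, then recover them
rng = random.Random(0)
sample = [2.0 - 1.5 * math.log(-math.log(rng.random())) for _ in range(20000)]
mu_hat, beta_hat = gumbel_mom(sample)
```

Repeating this generate-and-estimate loop R times and averaging the squared errors of (mu_hat, beta_hat) is exactly the MSE-based comparison the simulation study performs.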