The North Central Coast of Vietnam has a wide distribution of loose sand, which is often exposed at the surface, with thicknesses ranging from a few meters to over ten meters. In its loose state, this sand is sensitive to dynamic loads such as earthquakes, traffic, or machine foundations, and may liquefy under such loading, damaging the ground and overlying structures. The Standard Penetration Test (SPT) is widely used in engineering practice, and its values are useful for assessing soil liquefaction potential. This article therefore presents ground profiles from several sites in the North Central Coast of Vietnam and determines the liquefaction potential of the sand from SPT data using three parameters: the Factor of Safety against Liquefaction (FSLIQ), the Liquefaction Potential Index (LPI), and the Liquefaction Severity Number (LSN). The results show that the FSLIQ, LPI, and LSN values depend on the depth of the sand samples and the SPT values. In this study, sand distributed from 2.0 to 18.0 m with an (N1)60cs value of less than 20 has high liquefaction potential, with FSLIQ < 1, LPI often higher than 0.73, and LSN often higher than 10. Many soil profiles thus have high liquefaction potential, and these results should be considered for construction activities in this area.
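The SPT-based factor of safety can be sketched with the widely used simplified stress-based procedure. The abstract does not state which correlations the study adopted, so the stress-reduction coefficient and clean-sand CRR curve below (Youd et al. 2001 form) are illustrative assumptions, not the paper's method:

```python
# Minimal sketch of the simplified liquefaction check FS = CRR / CSR.
# All correlations and input numbers are illustrative assumptions.

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Earthquake-induced CSR at a given depth (simplified procedure)."""
    # Liao-Whitman style stress-reduction coefficient (illustrative)
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def cyclic_resistance_ratio(n1_60cs):
    """Clean-sand CRR for M7.5 from (N1)60cs (valid for N < 30)."""
    n = n1_60cs
    return 1.0 / (34.0 - n) + n / 135.0 + 50.0 / (10.0 * n + 45.0) ** 2 - 1.0 / 200.0

def factor_of_safety(n1_60cs, a_max_g, sigma_v, sigma_v_eff, depth_m):
    return cyclic_resistance_ratio(n1_60cs) / cyclic_stress_ratio(
        a_max_g, sigma_v, sigma_v_eff, depth_m)

# Loose sand ((N1)60cs = 10) at 6 m depth under 0.2 g shaking:
fs = factor_of_safety(10, 0.2, sigma_v=110.0, sigma_v_eff=60.0, depth_m=6.0)
print(round(fs, 2))  # FS < 1 flags the layer as liquefiable
```

With a corrected blow count this low, the layer comes out well below FS = 1, consistent with the high liquefaction potential reported for (N1)60cs < 20.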
Cryptography can be thought of as a toolbox in which potential attackers, given access to various computing resources and technologies, try to compute key values. In modern cryptography, the strength of an encryption scheme is largely determined by the size of the key. The goal here is therefore to create a strong key of minimal bit length suitable for lightweight encryption. Using elliptic curve cryptography (ECC) together with a Rubik's-cube transformation and image density, the image colors are combined and scrambled; the Rubik's-cube stages of the image are then encrypted using a chaotic logistic map and image density with a secret key, yielding an image that is secure against attacks. ECC itself is a powerful algorithm that generates a pair of public and private keys.
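The ECC key-pair step can be sketched over a toy prime field. The curve, prime, generator, and secret scalar below are textbook-sized illustrative assumptions (real systems use standardized curves such as secp256r1), not the parameters of this scheme:

```python
# Toy sketch of ECC key-pair generation: public_key = private_key * G.
# Curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1) is a
# classic teaching example, far too small for real security.

P = 17            # field prime (toy size)
A, B = 2, 2       # curve coefficients
G = (5, 1)        # generator point on the curve

def ec_add(p1, p2):
    """Add two curve points (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication k * point."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

private_key = 13                       # a random secret scalar in practice
public_key = scalar_mult(private_key, G)
print(public_key)
```

The security rests on the hardness of recovering `private_key` from `public_key`, which is what lets a short ECC key match the strength of a much longer RSA key.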
In this paper, a refractive index sensor based on a micro-structured optical fiber is proposed using the Finite Element Method (FEM). The designed fiber has a hexagonal cladding structure with six rings of air holes running around its solid core. The air holes of the fiber are infiltrated with different liquids such as water, ethanol, methanol, and toluene, and sensor characteristics such as effective refractive index, confinement loss, beam profile of the fundamental mode, and sensor resolution are investigated by employing the FEM. The designed sensor is characterized by low confinement loss and high resolution, so a small change in the analyte refractive index can be detected, which could be useful to detect the change of
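The confinement-loss figure such sensors report is conventionally derived from the imaginary part of the effective index that the FEM mode solver returns. A minimal sketch of that post-processing step, with illustrative sample numbers (the solver itself is not reproduced here):

```python
# Standard conversion from Im(n_eff) to confinement loss in dB/m:
# loss = 8.686 * k0 * Im(n_eff), with k0 = 2*pi/lambda.
import math

def confinement_loss_db_per_m(im_n_eff, wavelength_m):
    """Confinement loss from the imaginary part of the effective index."""
    k0 = 2 * math.pi / wavelength_m
    return 8.686 * k0 * im_n_eff

# e.g. an FEM mode with Im(n_eff) = 1e-9 at 1550 nm (illustrative values):
loss = confinement_loss_db_per_m(1e-9, 1550e-9)
print(f"{loss:.4f} dB/m")
```

A small Im(n_eff) thus maps directly to the low confinement loss the design aims for.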
Social media and news agencies are major sources for tracking news and events. With the massive amounts of data from these sources, it is easy to spread false or misleading information. Given the great dangers of fake news to societies, previous studies have paid close attention to detecting it and limiting its impact. This work therefore aims to use modern deep learning techniques to detect Arabic fake news. In the proposed system, an attention model is combined with bidirectional long short-term memory (Bi-LSTM) to identify the most informative words in a sentence. A multi-layer perceptron (MLP) is then applied to classify news articles as fake or real. The experiments are conducted on a newly launched Arabic dataset called the Ara
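The attention step over Bi-LSTM outputs can be sketched in a few lines of numpy. The dimensions, random weights, and additive score function below are illustrative assumptions; the paper's exact layer sizes and scoring function are not given in the abstract:

```python
# Sketch of attention pooling over Bi-LSTM hidden states: score each word,
# softmax the scores, and take the weighted sum as the sentence vector
# that feeds the MLP classifier. All tensors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden = 6, 8                  # 6 words, Bi-LSTM output size 8
H = rng.normal(size=(seq_len, hidden))  # stand-in for Bi-LSTM outputs

W = rng.normal(size=(hidden, hidden))   # learned in training; random here
v = rng.normal(size=(hidden,))
scores = np.tanh(H @ W) @ v             # one scalar score per word
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax over the sequence
context = weights @ H                   # attention-pooled sentence vector

print(context.shape)                    # pooled vector keeps the hidden size
```

The weights form a probability distribution over the words, which is what lets the model highlight the most informative tokens before classification.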
The objective of this work is to combine human biometric characteristics with unique attributes of the computer in order to protect computer networks and resource environments through the development of authentication and authorization techniques. On the biometric side, the best available methods and algorithms were studied, and the conclusion is that the fingerprint is the best, although it has some flaws. The fingerprint algorithm has therefore been improved so that its performance enhances the clarity of the ridge and valley structures of fingerprint images, taking into account the estimation of the direction and repetition of nearby ridges. On the computer side, a computer and its components, like a human, have unique
Most of today’s techniques encrypt all of the image data, which consumes a tremendous amount of time and computational payload. This work introduces a selective image encryption technique that encrypts predetermined bulks of the original image data in order to reduce the encryption/decryption time and the computational complexity of processing the huge image data. The technique applies a compression algorithm based on the Discrete Cosine Transform (DCT). Two approaches are implemented based on color space conversion (YCbCr and RGB) as a preprocessing step for the compression phase, and the resulting compressed sequence is selectively encrypted using a randomly generated combined secret key.
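The selective idea can be sketched on a single 8x8 block: take the DCT, then encrypt only the low-frequency coefficients that carry most of the visual information. The 4x4 selection, the XOR keystream, and the seed-as-key convention below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of DCT-based selective encryption on one 8x8 image block.
# Only the top-left (low-frequency) coefficients are encrypted, which is
# the source of the time savings over full-image encryption.
import numpy as np

N = 8
# Orthonormal DCT-II matrix
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

block = np.arange(64, dtype=float).reshape(8, 8)   # toy image block
coeffs = C @ block @ C.T                           # 2D DCT

# Selective encryption: XOR only the 4x4 low-frequency corner with a
# keystream; the seed stands in for the combined secret key.
rng = np.random.default_rng(seed=42)
keystream = rng.integers(0, 256, size=(4, 4))
quantized = np.round(coeffs[:4, :4]).astype(np.int64)
encrypted = quantized ^ keystream

# Decryption reverses the XOR with the same key-seeded stream.
decrypted = encrypted ^ keystream
print(bool(np.array_equal(decrypted, quantized)))  # True
```

Because XOR is self-inverse, the receiver only needs the shared seed to regenerate the keystream and recover the selected coefficients.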
The results showed a significant reduct
The emphasis of Master Production Scheduling (MPS), or tactical planning, is on the temporal and spatial disaggregation of the aggregate planning targets and forecasts, along with the provision and forecasting of the required resources. This procedure becomes considerably more difficult and slow as the number of resources, products, and periods considered increases. A number of studies have been carried out to understand these impediments and formulate algorithms to optimise the production planning problem, or more specifically the Master Production Scheduling (MPS) problem. These algorithms include an evolutionary algorithm called the Genetic Algorithm, a swarm intelligence methodology called the Gravitational Search Algorithm (GSA), the Bat Algorithm (BAT), T
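A genetic algorithm applied to a tiny MPS-style problem can be sketched as follows. The encoding (per-period production quantities), the inventory-deviation fitness, and all parameters are illustrative assumptions, not the formulations used in the surveyed studies:

```python
# Toy GA for a 4-period MPS: choose production per period to track demand
# under a capacity cap, penalizing backlog and leftover inventory.
import random

random.seed(1)
DEMAND = [40, 55, 30, 70]      # units required per period
CAPACITY = 60                  # maximum production per period

def fitness(plan):
    """Negative total inventory/backlog deviation (higher is better)."""
    inventory, penalty = 0, 0
    for produce, need in zip(plan, DEMAND):
        inventory += produce - need
        penalty += abs(inventory)          # holding or backlog cost
    return -penalty

def random_plan():
    return [random.randint(0, CAPACITY) for _ in DEMAND]

def crossover(a, b):
    cut = random.randrange(1, len(a))      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(plan):
    i = random.randrange(len(plan))        # re-randomize one period
    plan[i] = random.randint(0, CAPACITY)
    return plan

population = [random_plan() for _ in range(30)]
for _ in range(200):                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```

Note how the period-4 demand (70) exceeds the capacity (60), so a good plan must build inventory ahead; this coupling across periods is exactly what makes MPS hard as products and periods multiply.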
This paper addresses the problem of segmenting an image into regions representing objects, where the boundary between two regions is defined using connected component labeling (CCL). An efficient segmentation algorithm is developed based on this method and applied to different kinds of images. The algorithm consists of four steps: first, the image is converted to gray level; second, the gray-level image is converted to a binary image; third, edge detection is applied using the Canny edge detector; and finally, connected component labeling is applied to produce the segmented images. Best segmentation rates of (90%) are obtained using the developed algorithm, compared with (77%) obtained using (CCL) before enhancement.
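The labeling step itself can be sketched with a breadth-first search over a binary image. This is a generic 4-connectivity CCL, not the paper's enhanced pipeline, which adds gray-level conversion, binarization, and Canny edges before this stage:

```python
# Minimal connected component labeling (CCL) on a binary image using BFS
# with 4-connectivity; each foreground region receives a distinct label.
from collections import deque

def label_components(binary):
    """Return a label image and the number of foreground components."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                current += 1                       # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
labels, count = label_components(image)
print(count)  # 3 separate components
```

Each labeled region then corresponds to one candidate object in the segmented output.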
A remarkable correlation between chaotic systems and cryptography has been established through their sensitivity to initial states, unpredictability, and complex behavior. In one line of development, the stages of a chaotic stream cipher apply a discrete chaotic dynamic system to the generation of pseudorandom bits. Some of these generators are based on 1D chaotic maps and others on 2D ones. In the current study, a pseudorandom bit generator (PRBG) based on a new 2D chaotic logistic map is proposed that runs side-by-side and commences from random independent initial states. The structure of the proposed model consists of three components: a mouse input device, the proposed 2D chaotic system, and an initial permutation (IP) table. Statist
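The side-by-side idea can be sketched with two classic 1D logistic maps emitting one bit per step by cross-comparison. This is an illustrative stand-in: the paper's actual 2D map, its mouse-based seeding, and the IP table are not reproduced here:

```python
# Illustrative chaotic PRBG: two logistic maps x' = r*x*(1-x) run
# side-by-side from independent seeds; each iteration emits one bit by
# comparing their states (a common cross-comparison output rule).

def logistic_prbg(x0, y0, n_bits, r=3.99):
    """Generate n_bits from two side-by-side logistic maps."""
    x, y = x0, y0
    bits = []
    for _ in range(n_bits):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        bits.append(1 if x > y else 0)
    return bits

# Seeds stand in for the random independent initial states:
stream = logistic_prbg(0.123456, 0.654321, 64)
print(len(stream))
```

Sensitivity to the initial states means even a tiny change in either seed yields an entirely different bit stream, which is the property the statistical test suites then assess.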