Peak ground acceleration (PGA) is one of the critical factors in determining earthquake intensity. PGA is generally used to describe ground motion in a particular zone and can efficiently predict site ground-motion parameters for the design of engineering structures. Therefore, novel models are developed to forecast PGA for the Iraqi database using the particle swarm optimization (PSO) approach. A data set of 187 historical ground-motion recordings in Iraq's tectonic regions was used to build the explicit proposed models. The proposed PGA models relate to different seismic parameters, including earthquake magnitude (Mw), average shear-wave velocity (VS30), focal depth (FD), and nearest epicentral distance (REPi) to a seismic station. The derived PGA models are remarkably simple and straightforward and can be used reliably for pre-design purposes. The proposed PGA models (i.e., models I and II), obtained via the explicit formula produced using the PSO method, are highly correlated with the actual PGA records, as indicated by low coefficients of variation (CoV) of approximately 2.12% and 2.06% and mean values close to 1.0 (approximately 1.005 and 1.004). Lastly, a high frequency of low absolute relative error (ARE), below 5%, is recorded for the proposed models, thereby showing an acceptable error distribution.
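A PSO-based curve fit of the kind described above can be sketched as follows. This is a minimal illustration only: the attenuation form, the synthetic records, and every coefficient are hypothetical stand-ins, not the paper's Iraqi dataset or its models I and II.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in records (Mw, REPi in km) -- NOT the Iraqi dataset.
Mw   = rng.uniform(4.0, 7.0, 50)
Repi = rng.uniform(10.0, 200.0, 50)
true = np.array([-2.0, 0.9, -1.1, 10.0])          # hidden "true" coefficients
ln_pga = true[0] + true[1] * Mw + true[2] * np.log(Repi + true[3])

def mse(p):
    """Misfit of a hypothetical attenuation form ln(PGA) = a + b*Mw + c*ln(REPi + d)."""
    a, b, c, d = p
    pred = a + b * Mw + c * np.log(Repi + abs(d))
    return np.mean((pred - ln_pga) ** 2)

def pso(cost, dim=4, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: inertia w, cognitive c1, social c2."""
    x = rng.uniform(-5, 5, (n, dim))              # particle positions
    v = np.zeros((n, dim))                        # particle velocities
    pbest = x.copy()
    pcost = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        cur = np.apply_along_axis(cost, 1, x)
        improved = cur < pcost
        pbest[improved], pcost[improved] = x[improved], cur[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, cost(gbest)

best, err = pso(mse)
print(best, err)
```

The swarm only ever keeps improving its global best, so the returned misfit is at worst that of the best random initial particle; in practice it converges close to the hidden coefficients on this smooth four-parameter problem.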
Channel estimation and synchronization are considered the most challenging issues in Orthogonal Frequency Division Multiplexing (OFDM) systems. OFDM is highly affected by synchronization errors, which reduce subcarrier orthogonality and lead to significant performance degradation. Synchronization errors cause two issues: Symbol Time Offset (STO), which produces inter-symbol interference (ISI), and Carrier Frequency Offset (CFO), which results in inter-carrier interference (ICI). The aim of this research is to simulate comb-type pilot-based channel estimation for an OFDM system, showing the effect of the number of pilots on channel estimation performance, and to propose a modified estimation method for STO with less numb
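Comb-type pilot-based estimation of the kind simulated here can be sketched in a few lines: least-squares (LS) estimates are computed at the pilot subcarriers and interpolated across the data subcarriers. The 3-tap channel, pilot spacing, and noise level below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                  # subcarriers
pilot_idx = np.arange(0, N, 8)          # comb-type pilots: every 8th carrier
pilot_val = np.ones(len(pilot_idx))     # known pilot symbols (BPSK +1)

# Hypothetical frequency-selective channel: 3-tap impulse response.
h = np.array([1.0, 0.5, 0.25])
H = np.fft.fft(h, N)                    # true channel frequency response

X = rng.choice([-1.0, 1.0], N)          # random BPSK data symbols
X[pilot_idx] = pilot_val
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H * X + noise                       # received frequency-domain symbols

# LS estimate at pilot positions, then linear interpolation (real & imag parts).
H_p = Y[pilot_idx] / pilot_val
H_est = np.interp(np.arange(N), pilot_idx, H_p.real) \
      + 1j * np.interp(np.arange(N), pilot_idx, H_p.imag)

mse = np.mean(np.abs(H_est - H) ** 2)
print(mse)
```

With denser pilots the interpolation error shrinks at the cost of data throughput, which is exactly the pilot-number trade-off the abstract refers to; note that `np.interp` holds the last pilot's value flat beyond it, so the carriers after the final pilot carry most of the residual error.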
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions that contain no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
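The block-wise idea can be sketched as follows. Block variance is used here as a hypothetical stand-in for the paper's (unspecified) importance measure, and plain quantization stands in for the actual wavelet/DCT coding: detailed blocks keep a fine quantization step, smooth blocks get a coarse one.

```python
import numpy as np

def block_compress(img, block=8, thresh=100.0, q_fine=4, q_coarse=32):
    """Quantize each block with a step chosen from its variance (importance proxy)."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i+block, j:j+block].astype(float)
            q = q_fine if b.var() > thresh else q_coarse   # important -> fine step
            out[i:i+block, j:j+block] = np.round(b / q) * q
    return out

# Toy 16x16 "image": detailed (random) left half, smooth (flat) right half.
rng = np.random.default_rng(2)
img = np.hstack([rng.integers(0, 256, (16, 8)), np.full((16, 8), 120)])
rec = block_compress(img)
err_detail = np.abs(rec[:, :8] - img[:, :8]).mean()   # small: fine step preserved detail
err_smooth = np.abs(rec[:, 8:] - img[:, 8:]).mean()   # larger, but in a flat region
print(err_detail, err_smooth)
```

The coarse step yields a larger reconstruction error, but it is spent in the smooth half where it matters least, which is the trade-off the abstract describes.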
Realizing and understanding semantic segmentation is a demanding task not only for computer vision but also for the earth sciences. Semantic segmentation decomposes compound architectures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information for every object; it is a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation format, and working with 3D data raises many challenges, such as the small number, resolution, and accuracy of three-dimensional datasets. Deep learning now is the po
A Multiple System Biometric System Based on ECG Data
Password authentication is a popular approach to system security and an important security procedure for gaining access to a user's resources. This paper describes a password authentication method that uses a Modified Bidirectional Associative Memory (MBAM) algorithm for both graphical and textual passwords to improve speed and accuracy. Across 100 tests, the accuracy achieved is 100% for both graphical and textual password authentication.
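For orientation, a standard bipolar Bidirectional Associative Memory (BAM) can be sketched as below; the paper's modified BAM (MBAM) is not specified here, and the "password feature" / "user id" pattern pairs are hypothetical. A BAM stores pattern pairs in a correlation matrix and recalls one side of a pair from the other.

```python
import numpy as np

def train(X, Y):
    """Hebbian/correlation learning: W = sum over pairs of x_i^T y_i."""
    return X.T @ Y

def recall(W, x):
    """One-shot forward recall: y = sign(x W)."""
    return np.sign(x @ W)

# Two hypothetical bipolar (+1/-1) "password feature" -> "user id" pairs.
X = np.array([[ 1, -1,  1, -1,  1, -1],
              [-1, -1,  1,  1, -1,  1]])
Y = np.array([[ 1, -1],
              [-1,  1]])

W = train(X, Y)
print(recall(W, X[0]), recall(W, X[1]))   # recovers the stored user ids
```

Because the two stored patterns are nearly orthogonal, a single forward pass already recovers the correct associated pattern; a full BAM would iterate recall in both directions until the pair stabilizes.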
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. We compress the secret data using Huffman coding, and this compressed data is then embedded using the Laplacian sharpening method. We used Laplace filters to determine effective hiding places; then, based on a threshold value, we selected the places with the highest values obtained from these filters for embedding the watermark. Our aim in this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places with the highest edge values, where changes are less noticeable.
The perform
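The site-selection and embedding steps can be sketched as follows. Only the Laplacian-based selection and LSB embedding are shown: the payload is assumed to be an already Huffman-compressed bitstream (the compression itself is omitted), and the extractor is simply handed the site list, whereas a real scheme must let the receiver recover it.

```python
import numpy as np

LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # 4-neighbour Laplacian kernel

def laplacian(img):
    """Absolute Laplacian response; borders are left at zero for simplicity."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(LAP * img[i-1:i+2, j-1:j+2])
    return np.abs(out)

def embed(img, bits):
    resp = laplacian(img.astype(float))
    # Flat indices of the strongest edge responses -> least noticeable sites.
    sites = np.argsort(resp, axis=None)[::-1][:len(bits)]
    out = img.copy().ravel()
    out[sites] = (out[sites] & 0xFE) | np.array(bits, dtype=np.uint8)  # set LSBs
    return out.reshape(img.shape), sites

def extract(stego, sites):
    return [int(b) for b in stego.ravel()[sites] & 1]

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]          # stand-in for a Huffman bitstream
stego, sites = embed(img, bits)
recovered = extract(stego, sites)
print(recovered)
```

At most one bit per selected pixel changes, so the distortion is confined to high-gradient pixels where an LSB flip is hardest to notice, matching the selection criterion described above.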
Visible light communication (VLC) is an upcoming wireless technology for next-generation, high-speed data transmission. It has the potential for capacity enhancement due to its characteristically large bandwidth. Concerning signal processing and suitable transceiver design for VLC applications, an amplification-based optical transceiver is proposed in this article. The transmitter consists of a driver and a laser diode as the light source, while the receiver contains a photodiode and a signal-amplifying circuit. The design model is proposed for its simplicity in replacing the trans-impedance and transconductance circuits of conventional modules with a simple amplification circuit and interface converter. Th
Currently, with the huge increase in modern communication and network applications, fast transformation and compact storage of data are pressing issues. An enormous number of images are stored and shared every moment, especially in the social media realm; unfortunately, even with these marvelous applications, the limited size of transmitted data remains the main restriction, as essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques. For the same reason, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different