This paper investigated the behavior and shear strength of full-scale (T-section) reinforced concrete deep beams with various large web openings, designed according to the strut-and-tie approach of the ACI 318-19 specifications. A total of 7 deep beam specimens with identical shear span-to-depth ratios were tested under a mid-span concentrated load applied monotonically until beam failure. The main variables studied were the effects of the width and depth of the web openings on deep beam performance. The experimental results were calibrated against the strut-and-tie approach adopted by the ACI 318-19 code for the design of deep beams. The strut-and-tie design model provided in the ACI 318-19 code provision was assessed and found to be u…
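For context, the strut capacity check at the core of the ACI 318-19 strut-and-tie method takes the following form (a summary of the code's Chapter 23 provisions, quoted here for orientation rather than taken from this paper's results):

\[ F_{ns} = f_{ce} A_{cs}, \qquad f_{ce} = 0.85\,\beta_c \beta_s f'_c \]

where \(F_{ns}\) is the nominal strut strength, \(A_{cs}\) the strut cross-sectional area, \(f'_c\) the concrete compressive strength, \(\beta_s\) the strut coefficient, and \(\beta_c\) the confinement modification factor. A web opening that interrupts a strut effectively reduces \(A_{cs}\) and forces the truss model to be rerouted around it, which is presumably why opening width and depth are the study's main variables.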
Background: Obesity is increasingly prevalent in modern societies and constitutes a significant public health problem, carrying an increased risk of cardiovascular disease.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. Data were collected from eight densely populated areas of the Northern district of Karachi, Sindh, over a period of six months (10 January 2020 to 21 June 2020). The Figure Rating Scale (FRS) questionnaire was used to collect demographic data and perceptions of body weight. Body mass index (BMI) was used for ass…
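For reference, BMI is computed from measured weight and height as

\[ \mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} \]

with the standard WHO categories: underweight (< 18.5), normal (18.5–24.9), overweight (25.0–29.9), and obese (≥ 30.0 kg/m²).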
Internet image retrieval is an interesting task that requires efforts from both image processing and relationship-structure analysis. This paper proposes a compression method, based on image retrieval, for sending multiple photos over the Internet. First, face detection is implemented based on local binary patterns. The background is identified by matching global self-similarities and is compared with the backgrounds of the remaining images. The proposed algorithm bridges the gap between present image-indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on computers are already compressed by JPEG at the source. Similar images are found, and only a few images are sent instead…
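As a rough illustration of the texture descriptor named here, the basic 3×3 local binary pattern compares each pixel with its eight neighbors and packs the comparisons into an 8-bit code; the sketch below is a generic implementation, not the authors' detection pipeline:

```python
import numpy as np

def lbp_basic(gray: np.ndarray) -> np.ndarray:
    """Return the 8-bit LBP code of every interior pixel of a grayscale image."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # The 8 neighbors, enumerated clockwise starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nbr >= center).astype(np.int32) << bit
    return codes
```

A face detector built on LBP typically histograms these codes over sliding windows and compares the histograms against a trained model.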
Today, in the digital realm, where images constitute a massive share of the social media resource base but unfortunately suffer from the two issues of size and transmission, compression is the ideal solution. Pixel-based techniques are among the modern spatially optimized modeling techniques, with deterministic and probabilistic bases that involve a mean, an index, and a residual. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, along with lossless utilization of the deterministic part. The tested results achieved higher size-reduction performance than traditional pixel-based techniques and standard JPEG, by about 40% and 50%, respectively…
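The mean/index/residual split mentioned above is, in its generic form, a per-block decomposition; the following sketch shows only that generic idea and does not reproduce the paper's MMSA/C321 modeling (hypothetical helper, written for illustration):

```python
import numpy as np

def mean_index_residual(block: np.ndarray):
    """Split a pixel block into its mean, a binary index map, and a residual."""
    mean = block.mean()
    index = (block >= mean).astype(np.uint8)  # 1 bit per pixel: above/below mean
    residual = np.abs(block - mean)           # magnitude still to be encoded
    return mean, index, residual
```

In schemes of this family, one part (e.g. the mean and index) can be kept losslessly while the other (the residual) is modeled probabilistically and coded lossily, mirroring the split the abstract describes.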
The growing use of tele…
This paper presents a new secret diffusion scheme called Round Key Permutation (RKP), based on a nonlinear, dynamic, and pseudorandom permutation, for encrypting images block by block; images are considered particular data because of their size and information content, their two-dimensional nature, and their high redundancy and strong correlation. Firstly, the permutation table is calculated according to the master key and sub-keys. Secondly, the pixels of each block to be encrypted are scrambled according to the permutation table. Thereafter, the AES encryption algorithm is used in the proposed cryptosystem, replacing the linear permutation of the ShiftRows step with the nonlinear and secret pe…
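A minimal sketch of the kind of key-driven block scrambling the scheme describes is given below; the actual RKP table derivation from the master key and sub-keys is specified in the paper, and a seeded shuffle merely stands in for it here:

```python
import numpy as np

BLOCK = 16  # one 16-byte block, matching the AES state size

def permutation_table(key: bytes) -> np.ndarray:
    """Derive a block-sized permutation from a key (stand-in derivation)."""
    rng = np.random.default_rng(int.from_bytes(key, "big"))
    return rng.permutation(BLOCK)

def scramble(block: np.ndarray, table: np.ndarray) -> np.ndarray:
    return block[table]            # output position i takes the pixel at table[i]

def unscramble(block: np.ndarray, table: np.ndarray) -> np.ndarray:
    out = np.empty_like(block)
    out[table] = block             # invert the permutation
    return out

key = bytes.fromhex("00112233445566778899aabbccddeeff")
tbl = permutation_table(key)
blk = np.arange(BLOCK, dtype=np.uint8)
assert np.array_equal(unscramble(scramble(blk, tbl), tbl), blk)
```

Unlike ShiftRows, whose permutation is fixed and public, a key-derived table is secret and can vary per round, which is the property the scheme exploits.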
Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype coclassification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, this haplotype is labeled…
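For the flavor of the second-stage test, an unevenly distributed haplotype can be checked with a simple 2×2 contingency test; the counts below are made up for illustration, and the paper's coclassification step groups haplotypes before any such test:

```python
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: carries haplotype H, does not (toy counts).
table = [[30, 170],
         [12, 188]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```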
It is frequently asserted that an advantage of a binary search tree implementation of a set over a linked-list implementation is that, for reasonably well-balanced binary search trees, the average search time (to discover whether or not a particular element is present in the set) is O(log N), with logarithm base 2, where N is the number of elements in the set (the size of the tree). This paper presents an experiment for measuring the observed binary-search-tree search time and comparing it with the expected (theoretical) time; the experiment confirmed the hypothesis. The experiment was carried out using a Turbo Pascal program implemented with recursion and a statistical method to prove th…
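The same experiment is easy to re-create; the sketch below mirrors its idea in Python (the original used Turbo Pascal): build a random binary search tree recursively, measure the average successful-search depth, and set it against the O(log₂ N) expectation.

```python
import math
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Recursively insert key, returning the (possibly new) subtree root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def depth_of(node, key, d=1):
    """Number of comparisons needed to find a key known to be present."""
    if key == node.key:
        return d
    return depth_of(node.left if key < node.key else node.right, key, d + 1)

N = 10_000
keys = random.sample(range(10 * N), N)
root = None
for k in keys:
    root = insert(root, k)

avg = sum(depth_of(root, k) for k in keys) / N
print(f"average search depth: {avg:.2f}  (log2 N = {math.log2(N):.2f})")
```

For random insertion orders the measured average depth comes out at a small constant multiple of log₂ N, which is the O(log N) behavior the hypothesis asserts.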
The issue of increasing the area covered by a wireless sensor network (WSN) with a restricted number of sensors is addressed using an improved cuckoo search (CS) employing the PSO algorithm and opposition-based learning (ICS-PSO-OBL). First, the iteration is carried out by updating the old solution dimension by dimension, to achieve independent updating across dimensions in the high-dimensional optimization problem. The PSO operator is then incorporated to lessen the imbalance between exploration and exploitation ability in the preference-random-walk stage. Exceptional individuals are selected from the population using OBL to boost the chance of finding the optimal solution based on the fitness value. The ICS-PSO-OBL is used to maximize coverage in a WSN by converting r…
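The opposition-based learning step mentioned here is conventionally defined as follows (standard OBL notation, not taken from this abstract): for a candidate solution \(x\) with bounds \([lb_j, ub_j]\) in each dimension \(j\), its opposite is

\[ \tilde{x}_j = lb_j + ub_j - x_j, \qquad j = 1, \dots, D, \]

and the fitter of \(x\) and \(\tilde{x}\) is retained, which is how exceptional individuals boost the chance of landing near the optimum.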
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generates a better DL model, and performance is also application-dependent. This issue is the main barrier for…