This paper investigates the behavior and shear strength of full-scale reinforced concrete deep T-beams with various large web openings, designed according to the strut-and-tie approach of the ACI 318-19 specifications. A total of seven deep beam specimens with identical shear span-to-depth ratios were tested under a monotonically applied mid-span concentrated load until failure. The main variables studied were the width and depth of the web openings. The experimental results were compared with the strut-and-tie approach adopted by ACI 318-19 for the design of deep beams. The strut-and-tie design model provided in ACI 318-19 was assessed and found to be unsatisfactory for deep beams with large web openings. A simplified empirical equation, based on the strut-and-tie model, was proposed to estimate the shear strength of deep T-beams with large web openings and was verified by numerical analysis. The numerical study used three-dimensional finite element models developed in ABAQUS to simulate and predict the performance of the deep beams. The numerical simulations were in close agreement with the experimental data. The test results showed that enlarging the web openings substantially reduces the shear capacity, and that increasing the opening width reduces the load-carrying capacity more than increasing its depth.
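For reference, the strut capacity check on which the ACI 318-19 strut-and-tie provisions rest takes the general form below. This is a sketch of the standard code expression in the usual notation, not the simplified empirical equation proposed in the paper.

```latex
% General form of the ACI 318-19 strut check (standard code notation,
% not the paper's proposed empirical equation):
\begin{align}
  F_{ns} &= f_{ce}\, A_{cs},\\
  f_{ce} &= 0.85\,\beta_c\,\beta_s\, f'_c,
\end{align}
% where $A_{cs}$ is the strut cross-sectional area, $\beta_s$ depends on strut
% geometry and reinforcement, $\beta_c$ is the confinement modification factor,
% and the design requirement is $\phi F_{ns} \ge F_u$ for the factored strut force $F_u$.
```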
Image databases are growing exponentially because of rapid developments in social networking and digital technologies, and efficient techniques are required to search them. Content-based image retrieval (CBIR) is one such technique. This paper presents a multistage CBIR approach that addresses computational cost while reasonably preserving accuracy. The first stage acts as a filter that passes images to the next stage based on SKTP, which is used here for the first time in the CBIR domain. In the second stage, LBP and Canny edge detection are employed to extract texture and shape features from the query image and from the images in the newly constructed database. …
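As an illustration of the second-stage feature extraction described above, the following minimal sketch uses scikit-image to compute an LBP texture histogram and a Canny-based shape descriptor; the parameter values, histogram layout, and distance measure are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of texture (LBP) and shape (Canny edge) feature extraction,
# as in the second retrieval stage. Parameter values are assumptions.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern, canny

def extract_features(path, points=8, radius=1, canny_sigma=2.0):
    gray = color.rgb2gray(io.imread(path))          # load and convert to grayscale
    # Texture: uniform LBP histogram (points + 2 bins for the 'uniform' method).
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    # Shape: fraction of edge pixels per image quadrant from a Canny edge map.
    edges = canny(gray, sigma=canny_sigma)
    h, w = edges.shape
    quadrants = [edges[:h // 2, :w // 2], edges[:h // 2, w // 2:],
                 edges[h // 2:, :w // 2], edges[h // 2:, w // 2:]]
    shape_vec = np.array([q.mean() for q in quadrants])
    return np.concatenate([hist, shape_vec])

def distance(query_vec, db_vec):
    # Simple Euclidean distance for ranking candidate images.
    return float(np.linalg.norm(query_vec - db_vec))
```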
Because computer systems and networks are used in almost every aspect of daily life, security threats to them have increased significantly. Password-based authentication is usually used to verify the legitimate user; however, this method has many weaknesses, such as password sharing, brute-force attacks, dictionary attacks, and guessing. Keystroke dynamics is a well-known and inexpensive behavioral biometric technology that authenticates a user based on an analysis of his or her typing rhythm. In this way, intrusion becomes more difficult, because both the password and the typing speed must match the correct keystroke patterns. This thesis considers static keystroke dynamics as a transparent layer of …
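To illustrate the typing-rhythm features on which keystroke dynamics relies, the sketch below derives dwell times and flight times from key press/release events and verifies a login attempt against an enrolled template; the feature set and the distance threshold are assumptions for illustration, not the scheme evaluated in the thesis.

```python
# Minimal sketch of keystroke-dynamics verification: build dwell-time and
# flight-time features from (key, press_time, release_time) events and compare
# a login attempt against an enrolled template. Threshold value is an assumption.
import numpy as np

def features(events):
    """events: list of (key, press_t, release_t) tuples in seconds, in typing order."""
    dwell = [rel - prs for _, prs, rel in events]                  # key hold times
    flight = [events[i + 1][1] - events[i][2]                      # release-to-press gaps
              for i in range(len(events) - 1)]
    return np.array(dwell + flight)

def enroll(samples):
    """Average several typing samples of the same password into a template."""
    return np.mean([features(s) for s in samples], axis=0)

def verify(template, attempt, threshold=0.25):
    """Accept the attempt if its timing vector is close enough to the template."""
    return float(np.linalg.norm(features(attempt) - template)) <= threshold
```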
Any software application can be divided into four distinct, interconnected domains: the problem domain, usage domain, development domain, and system domain. A methodology for assistive technology software development is presented here that seeks to provide a framework for requirements elicitation studies, together with their subsequent mapping, implementing use-case driven object-oriented analysis for component-based software architectures. Early feedback on the effectiveness of user interface components is obtained through in-process usability evaluation. A model is suggested that consists of three environments, or worlds: the problem, conceptual, and representational environments. This model aims to emphasize the relationship between the objects …
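Purely as an illustration of the use-case driven, component-based mapping mentioned above, the sketch below links a hypothetical elicited use case to a UI component with an early usability-feedback hook; all names and structures are invented for the example and are not the methodology's actual artifacts.

```python
# Illustrative sketch only: mapping a requirements-elicitation use case onto a
# UI component, with an early usability-feedback hook. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    actor: str
    steps: list[str] = field(default_factory=list)

@dataclass
class UIComponent:
    name: str
    usability_scores: list[int] = field(default_factory=list)   # early user feedback

    def record_feedback(self, score: int) -> None:
        self.usability_scores.append(score)

    def average_score(self) -> float:
        return sum(self.usability_scores) / len(self.usability_scores)

# Problem world: the elicited use case; representational world: the UI component
# that realizes it; usability scores feed back into the conceptual model.
read_aloud = UseCase("ReadDocumentAloud", actor="visually impaired user",
                     steps=["open document", "select read-aloud", "adjust speed"])
toolbar = UIComponent("ReadAloudToolbar")
toolbar.record_feedback(4)
toolbar.record_feedback(5)
print(read_aloud.name, "->", toolbar.name, "avg usability:", toolbar.average_score())
```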
H. M. Al-Dabbas, R. A. Azeez, and A. E. Ali, Iraqi Journal of Computers, Communications, Control and Systems Engineering, 2023.
Color image compression is a good way to encode digital images by decreasing the number of bits needed to store the image. The main objectives are to reduce storage space, reduce transmission costs, and maintain good quality. In the current work, a simple and effective methodology is proposed for compressing color art digital images at a low bit rate by compressing the matrix resulting from scalar quantization (which reduces the number of bits per pixel from 24 to 8) using displacement coding, and then compressing the remainder using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and …
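To make the coding stage concrete, the following is a textbook sketch of LZW encoding applied to an 8-bit quantized stream; it illustrates the general algorithm rather than the paper's specific implementation.

```python
# Minimal sketch of LZW encoding over an 8-bit quantized image stream
# (textbook algorithm, not the paper's specific implementation).
def lzw_encode(data: bytes) -> list[int]:
    dictionary = {bytes([i]): i for i in range(256)}   # initial single-byte codes
    next_code = 256
    current = b""
    codes = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                        # extend the current phrase
        else:
            codes.append(dictionary[current])          # emit code for the known phrase
            dictionary[candidate] = next_code          # learn the new phrase
            next_code += 1
            current = bytes([byte])
    if current:
        codes.append(dictionary[current])
    return codes

# Example: a run-heavy quantized row compresses to far fewer codes than bytes.
row = bytes([12] * 50 + [13] * 50)
print(len(row), "bytes ->", len(lzw_encode(row)), "LZW codes")
```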
Cuneiform images require several processing steps in order to recognize their contents, including image enhancement to clarify the objects (symbols) found in the image. The vector used for classifying a symbol, called the symbol structural vector (SSV), is built from information about the wedges in the symbol. The experimental tests were carried out on a number of symbols of varying relevance, including various drawings, in an online method, and the results show high accuracy. The methods and algorithms were programmed in Visual Basic 6.0. In this research, more than one method was applied to extract information from digital images of cuneiform tablets in order to identify most of the signs of Sumerian cuneiform.
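As an illustration of the kind of structural vector described above, the sketch below enhances a symbol image, segments wedge-like marks, and collects a few structural attributes; the enhancement step and the chosen attributes (count, mean area, mean orientation) are assumptions for illustration, not the paper's exact SSV definition.

```python
# Illustrative sketch: enhance a cuneiform symbol image, segment wedge-like marks,
# and build a small structural feature vector. The attributes used here are
# assumptions, not the paper's exact SSV definition.
import numpy as np
from skimage import io, color, exposure, filters, measure

def symbol_structural_vector(path):
    gray = color.rgb2gray(io.imread(path))
    enhanced = exposure.equalize_hist(gray)                 # image enhancement step
    mask = enhanced < filters.threshold_otsu(enhanced)      # dark wedges on light clay
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels) if r.area > 20]  # drop noise
    if not regions:
        return np.zeros(3)
    areas = np.array([r.area for r in regions], dtype=float)
    orientations = np.array([r.orientation for r in regions])
    return np.array([len(regions), areas.mean(), orientations.mean()])

def classify(vector, templates):
    """Nearest-template classification of an SSV against known symbol vectors."""
    return min(templates, key=lambda name: np.linalg.norm(vector - templates[name]))
```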
Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large volume of textual data produced daily. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well with short texts, because two similar texts may be written with different terms by employing synonyms. As a result, short texts should be compared semantically. This paper presents a semantic similarity measurement method for texts that combines knowledge-based and corpus-based semantic information to build a semantic network that represents …
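As a simplified illustration of combining knowledge-based and corpus-based signals, the sketch below mixes WordNet path similarity with a TF-IDF cosine score; the equal weighting, greedy word matching, and background corpus are assumptions for illustration, not the paper's model.

```python
# Simplified sketch of a hybrid short-text similarity score: a knowledge-based
# component from WordNet (greedy best-pair word similarity) combined with a
# corpus-based component (TF-IDF cosine). Weights and alignment are assumptions.
# Requires: nltk.download("wordnet") beforehand.
import numpy as np
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def word_sim(w1, w2):
    """Best WordNet path similarity over all synset pairs (0 if none found)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=1.0 if w1 == w2 else 0.0)

def knowledge_sim(t1, t2):
    """Average, over words of each text, of the best match in the other text."""
    a, b = t1.lower().split(), t2.lower().split()
    fwd = np.mean([max(word_sim(w, v) for v in b) for w in a])
    bwd = np.mean([max(word_sim(w, v) for v in a) for w in b])
    return (fwd + bwd) / 2

def hybrid_sim(t1, t2, corpus, alpha=0.5):
    tfidf = TfidfVectorizer().fit(corpus + [t1, t2])
    v1, v2 = tfidf.transform([t1]), tfidf.transform([t2])
    corpus_sim = float(cosine_similarity(v1, v2)[0, 0])
    return alpha * knowledge_sim(t1, t2) + (1 - alpha) * corpus_sim

print(hybrid_sim("the car drives on the road", "a vehicle moves along the street",
                 corpus=["cars and vehicles move on roads", "streets carry traffic"]))
```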