This article presents the results of an experimental investigation of using carbon fiber–reinforced polymer (CFRP) sheets to enhance the behavior of reinforced concrete deep beams with large web openings in shear spans. A set of 18 specimens was fabricated and tested up to failure to evaluate structural performance in terms of cracking, deformation, and load-carrying capacity. All tested specimens were 1500 mm long, 500 mm deep, and 150 mm wide. The parameters studied were the opening size, the opening location, and the strengthening factor. Two deep beams served as control specimens without openings and without strengthening. Eight deep beams were fabricated with openings but without strengthening, while the other eight had openings in the shear spans and CFRP sheet strengthening around the opening zones. The opening size was 200 × 200 mm in eight deep beams and 230 × 230 mm in the other eight. In eight specimens the opening was located at the center of the shear span, while in the other eight it was attached to the interior edge of the shear span. CFRP sheets were installed around the openings to compensate for the cutout area of concrete. The experimental results showed that creating openings in the shear spans affects the load-carrying capacity: the reduction in failure load for specimens with openings but without strengthening reached 66% compared to deep beams without openings. On the other hand, strengthening beams with openings using CFRP sheets increased the failure load by 20%–47% compared with the identical unstrengthened deep beams. A significant contribution of the CFRP sheets in restricting the deformability of the deep beams was observed.
Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. Three experiments were performed to determine which approach gives the best accuracy. The first extracted only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracted only the first 12 Mel-frequency cepstral coefficient (MFCC) features; and the last applied feature fusion between the two feature sets. In all experiments, the features were classified using five classification techniques, including the Random Forest (RF),
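The three simple features from the first experiment can be computed directly from the raw waveform. Below is a minimal sketch (not code from the paper) using only the Python standard library; `basic_features` is a hypothetical helper name.

```python
import statistics

def basic_features(signal):
    """Compute the three simple features named above: zero crossing
    rate (ZCR), mean, and standard deviation (SD) of a waveform."""
    # ZCR: fraction of adjacent sample pairs whose signs differ
    signs = [s < 0 for s in signal]
    zcr = sum(a != b for a, b in zip(signs, signs[1:])) / (len(signal) - 1)
    mean = sum(signal) / len(signal)
    sd = statistics.pstdev(signal)  # population standard deviation
    return zcr, mean, sd

zcr, mean, sd = basic_features([0.1, -0.2, 0.3, -0.1, 0.05])
```

In a real SER pipeline these scalars would be computed per frame or per utterance and fed to the classifier alongside the MFCC vector.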
Many tools and techniques have recently been adopted to develop construction materials that are less harmful and friendlier to the environment. New products can be achieved through the recycling of waste materials. Thus, this study aims to use recycled glass bottles as sustainable materials.
Our challenge is to use nano glass powder, added to or replacing part of the cement by weight, to produce concrete with enhanced strength.
A nano recycled glass p
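To illustrate the two dosing schemes mentioned above (addition versus partial replacement by weight of cement), here is a small hypothetical helper; the function name and the example values (400 kg cement, 5% dosage) are illustrative assumptions, not quantities from the study.

```python
def mix_masses(cement_mass, nano_glass_pct, mode="replacement"):
    """Return (cement, nano glass powder) masses for a dosage given as a
    percentage of the cement weight, by replacement or by addition."""
    glass = cement_mass * nano_glass_pct / 100.0
    if mode == "replacement":
        # part of the cement is substituted by the powder
        return cement_mass - glass, glass
    # "addition": the powder is dosed on top of the full cement content
    return cement_mass, glass
```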
Bioinformatics is a subfield of computer science and biology concerned with the processes applied to biological data, such as gathering, processing, storing, and analyzing it. Biological data (ribonucleic acid (RNA), deoxyribonucleic acid (DNA), and protein sequences) has many applications and uses in many fields (data security, data segmentation, feature extraction, etc.). DNA sequences are used in the cryptography field, exploiting the properties of biomolecules as carriers of data. Messenger RNA (mRNA) is a single strand used to make proteins, containing genetic information; it carries messages from DNA to the ribosomes in the cytosol. In this paper, a new encryption technique bas
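As background to DNA-based cryptography, each nucleotide can carry 2 bits, so arbitrary bytes map to base sequences. The sketch below shows only that basic byte-to-base encoding; it is not the encryption technique proposed in the paper, and the mapping A=00, C=01, G=10, T=11 is one common convention, chosen here as an assumption.

```python
BASES = "ACGT"  # each base encodes 2 bits: A=00, C=01, G=10, T=11

def bytes_to_dna(data):
    """Encode bytes as a DNA base string, 4 bases per byte."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(seq):
    """Decode a DNA base string back to the original bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)
```

A DNA-based cipher would then operate on such sequences, e.g. by combining them with a key-derived base sequence.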
Secure data communication across networks is always threatened by intrusion and abuse. A Network Intrusion Detection System (IDS) is a valuable tool for the in-depth defense of computer networks. Most research and applications in the field of intrusion detection systems have been built by analysing datasets containing various attack types with batch machine-learning classification. The present study introduces an intrusion detection system based on data stream classification. Several data stream algorithms were applied to the CICIDS2017 dataset, which contains several new types of attacks. The results were evaluated to choose the algorithm that best satisfies high accuracy and low computation time.
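To illustrate the data-stream setting (one-pass, example-at-a-time learning, in contrast to batch classification), here is a minimal incremental Gaussian naive Bayes in plain Python. It is a generic sketch, not one of the algorithms evaluated in the study, and all names are assumptions.

```python
import math

class StreamingGaussianNB:
    """Incremental Gaussian naive Bayes: per-class running means and
    variances are updated one example at a time (Welford's method)."""

    def __init__(self):
        self.stats = {}  # class label -> [count, means, M2 accumulators]

    def learn_one(self, x, y):
        if y not in self.stats:
            self.stats[y] = [0, [0.0] * len(x), [0.0] * len(x)]
        s = self.stats[y]
        s[0] += 1
        for i, v in enumerate(x):
            d = v - s[1][i]
            s[1][i] += d / s[0]            # running mean
            s[2][i] += d * (v - s[1][i])   # running sum of squared deviations

    def predict_one(self, x):
        total = sum(s[0] for s in self.stats.values())
        best, best_lp = None, -math.inf
        for y, (n, mean, m2) in self.stats.items():
            lp = math.log(n / total)       # class prior
            for i, v in enumerate(x):
                var = max(m2[i] / n if n > 1 else 1.0, 1e-9)
                lp += -0.5 * math.log(2 * math.pi * var) \
                      - (v - mean[i]) ** 2 / (2 * var)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

Such a model never stores past examples, which is what makes it suitable for unbounded traffic streams.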
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to clearly recognize the text in such pictures. Segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. According to the proposed method, the images were first filtered using the Wiener filter, then the active contour algorithm could b
Fraud includes acts involving the exercise of deception by multiple parties inside and outside companies in order to obtain economic benefits at the expense of those companies. Fraud is typically committed when three factors are present: opportunity, motivation, and rationalization. Detecting fraud requires indications of its possible existence. Here, Benford's law can play an important role in directing attention towards possible financial fraud in a company's accounting records, saving the effort and time required to detect and prevent it.
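Benford's law predicts the frequency of leading digits in naturally occurring numeric data: P(d) = log10(1 + 1/d), so the digit 1 should lead about 30.1% of the time. A basic screening step compares observed first-digit frequencies in the records against these expectations; the sketch below is illustrative, with hypothetical function names, not the audit procedure from the paper.

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: P(first digit = d) = log10(1 + 1/d), for d in 1..9."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def observed_freqs(values):
    """Observed first-digit frequencies of the nonzero values."""
    digits = [first_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}
```

Large deviations between `observed_freqs` and `benford_expected` (e.g. via a chi-squared test) flag accounts worth closer inspection; conformity alone does not prove the absence of fraud.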
This work addressed the assignment problem (AP) based on fuzzy costs, where the objective is to minimize the cost. A triangular or trapezoidal fuzzy number was assigned to each fuzzy cost. In addition, the assignment models were applied to linguistic variables, which were first converted to quantitative fuzzy data using Yager's ranking method. The results showed that the quantitative data have a considerable effect when considered in fuzzy mathematical models.
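For a triangular fuzzy number (a, b, c), Yager's ranking index is the integral of the alpha-cut midpoints, which reduces to (a + 2b + c)/4. The sketch below defuzzifies a fuzzy cost matrix this way and then solves the crisp assignment problem by enumeration; the helper names and brute-force solver (fine only for small instances) are assumptions, not the paper's model.

```python
from itertools import permutations

def yager_rank(tfn):
    """Yager's ranking index of a triangular fuzzy number (a, b, c):
    the average of the alpha-cut midpoints, i.e. (a + 2b + c) / 4."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

def solve_fuzzy_assignment(fuzzy_costs):
    """Defuzzify each cost with Yager's index, then pick the assignment
    (row i -> column p[i]) with the minimum total crisp cost."""
    n = len(fuzzy_costs)
    crisp = [[yager_rank(c) for c in row] for row in fuzzy_costs]
    best = min(permutations(range(n)),
               key=lambda p: sum(crisp[i][p[i]] for i in range(n)))
    return list(best), sum(crisp[i][best[i]] for i in range(n))
```

A production solver would replace the enumeration with the Hungarian algorithm; the defuzzification step is unchanged.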
Keywords provide the reader with a summary of the contents of a document and play a significant role in information retrieval systems, especially in search engine optimization and bibliographic databases. Furthermore, keywords help to classify the document into its related topic. Keyword extraction has traditionally been manual, depending on the content of the document or article and the judgment of its author. Manual extraction of keywords is costly, time-consuming, and error-prone. In this research, an automatic Arabic keyword extraction model based on deep learning algorithms is proposed. The model consists of three main steps: preprocessing, feature extraction, and classification to classify the document
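As a point of contrast with the deep learning model proposed here, a classical statistical baseline ranks candidate keywords by TF-IDF against a background corpus. The sketch below is such a baseline, not the paper's model; the function name, whitespace tokenization, and toy corpus are illustrative assumptions (real Arabic text would also need normalization and stemming in the preprocessing step).

```python
import math
from collections import Counter

def extract_keywords(doc, corpus, top_k=3):
    """Rank the terms of `doc` by TF-IDF against `corpus` (a list of
    background documents) and return the top_k candidate keywords."""
    tokens = doc.lower().split()
    tf = Counter(tokens)
    n_docs = len(corpus) + 1  # background documents plus `doc` itself
    scores = {}
    for term, count in tf.items():
        # document frequency, smoothed by counting `doc` itself
        df = 1 + sum(term in d.lower().split() for d in corpus)
        scores[term] = (count / len(tokens)) * math.log(n_docs / df)
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

Terms common across the corpus (stopwords) score near zero, so distinctive terms surface first.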
Visual media delivers information more effectively than traditional reading. For that reason, with the wide propagation of multimedia websites, large video archives have become a main resource for users. This research focuses on existing developments in applying classical phrase-search methods to a linked vocal transcript and then retrieving the corresponding video, an easier way to search visual media. The system has been implemented using JSP and the Java language to search speech within videos.
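The transcript-based retrieval idea can be sketched as matching a query phrase against time-stamped transcript segments and returning the matching offsets, from which the video can be played. This is an illustrative sketch in Python (the described system uses JSP/Java); the function name and data layout are assumptions.

```python
def search_transcript(transcript, phrase):
    """Return the start times (seconds) of transcript segments that
    contain the query phrase, case-insensitively.

    `transcript` is a list of (start_seconds, text) pairs, as a
    speech-to-text or captioning tool might produce."""
    phrase = phrase.lower()
    return [t for t, text in transcript if phrase in text.lower()]
```

The returned timestamps can be passed to the video player to seek directly to the matching passages.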