Delivering 4K (Ultra HD) real-time video streaming over the internet at low bitrate and low latency is the challenge this paper addresses. Compression technology and transfer links are the main elements that influence video quality, so to deliver video over the internet or another fixed-capacity medium it is essential to compress the video to more manageable bitrates (customarily in the 1-20 Mbps range). In this study, video quality is examined using the H.265/HEVC compression standard, and the relationship between video quality and bitrate is investigated across various constant rate factors, GOP patterns, quantization parameters, RC-lookahead values, and video sequences with different motion characteristics. An ultra-high-definition video source is used, downsampled, and encoded at resolutions of 3840x2160, 1920x1080, 1280x720, 704x576, 352x288, and 176x144. To determine the best H.265 feature configuration for each resolution, experiments were conducted that achieved a PSNR of 36 dB at the specified bitrate. The delivery side (encoder resource) selects the resolution based on the end-user application, while streaming adapted to the available bandwidth is achieved by embedding a controller with the MPEG-DASH protocol at the client side. Adaptive streaming methods deliver content encoded at different representations of video quality and bitrate, with each representation divided into chunks of time. In this paper, we propose HTTP/2 as the transport protocol to achieve low-latency live video streaming while avoiding the problems of HTTP/1.
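The quality target in the abstract is stated as a PSNR of 36 dB. PSNR is computed directly from the mean squared error between a source frame and its decoded version; the sketch below (plain NumPy, with made-up frame values for illustration) shows how a uniform per-pixel error of 4 grey levels on 8-bit video lands almost exactly at that 36 dB figure.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: an 8-bit frame and a version with a uniform error of 4 levels.
ref = np.full((64, 64), 128, dtype=np.uint8)
dec = ref + 4            # every pixel off by 4 -> MSE = 16
print(round(psnr(ref, dec), 2))  # ~36.09 dB, right at the paper's target
```

The same function applies per-frame to a decoded sequence; averaging the per-frame values gives the sequence-level PSNR typically reported for HEVC experiments.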
A security system can be defined as a method of providing a form of protection for any type of data. Most security systems perform a sequential process in order to achieve good protection. Authentication is one part of such a process; it is used to verify a user's permission to access and use the system. Several kinds of methods are used, including knowledge-based and biometric features. The electroencephalograph (EEG) signal is one of the most widely used signals in the bioinformatics field. EEG has five major wave patterns: Delta, Theta, Alpha, Beta, and Gamma. Every wave has five features: amplitude, wavelength, period, speed, and frequency. The linear
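The five EEG wave patterns named above are conventionally separated by frequency band. The sketch below isolates one band from a raw signal with a zero-phase Butterworth band-pass filter; the band edges, sampling rate, and filter order are conventional assumptions for illustration, not values taken from this work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG band edges in Hz (assumed, not from the paper).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_filter(signal, band, fs=256):
    """Zero-phase 4th-order Butterworth band-pass for one EEG band."""
    low, high = BANDS[band]
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

# Synthetic 2-second signal: a 10 Hz (alpha) plus a 40 Hz (gamma) component.
fs = 256
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
alpha = band_filter(x, "alpha", fs)  # retains the 10 Hz component
```

Band-filtered signals like `alpha` are the usual starting point for extracting per-band features (amplitude, dominant frequency, power) used in EEG-based authentication.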
Image recognition is one of the most important applications of information processing. In this paper, a comparison between three-level wavelet-based image recognition techniques is carried out using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT): stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). These techniques are compared according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) of each third
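The DWT building block in these cascades can be illustrated with a one-level Haar transform, the simplest wavelet. The sketch below (NumPy only; the data row and the drop-the-details "compression" step are illustrative, not the paper's pipeline) decomposes a signal, reconstructs it, and computes the PSNR/RMSE metrics named above.

```python
import numpy as np

def haar_dwt(x):
    """One level of the decimated Haar transform (DWT)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction from both bands."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def metrics(orig, recon, peak=255.0):
    """PSNR (dB) and RMSE between an original and its reconstruction."""
    mse = np.mean((orig - recon) ** 2)
    rmse = np.sqrt(mse)
    psnr = 10 * np.log10(peak ** 2 / mse) if mse else np.inf
    return psnr, rmse

# Toy "image row": keep only approximation coefficients (drop details),
# a crude stand-in for the lossy step whose quality the paper measures.
row = np.array([10., 12., 200., 202., 90., 92., 30., 32.])
a, d = haar_dwt(row)
recon = haar_idwt(a, np.zeros_like(d))
psnr, rmse = metrics(row, recon)  # each pair collapses to its mean
```

The SWT differs in that it skips the downsampling step (every output band keeps the input length), which is what the sss/ssw/.../www labels encode: which of the two transforms is applied at each of the three decomposition levels.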
The agent-based modeling approach is currently used extensively to analyze complex systems. Its growth has been supported by its ability to convey distinct levels of interaction in a complex, detailed environment. Meanwhile, agent-based models tend to become progressively more complex, so powerful modeling and simulation techniques are needed to address this rise in complexity. In recent years, a number of platforms for developing agent-based models have been developed. In most of them, however, only a discrete representation of the environment and a single level of interaction are provided, while two or three levels are rarely considered. The key issue is that modellers' work in these areas is not assisted by simulation platforms
Advances in digital technology and the World Wide Web have led to an increase in digital documents that are used for various purposes such as publishing and digital libraries. This phenomenon raises awareness of the need for effective techniques that can help during the search and retrieval of text. One of the most needed tasks is clustering, which categorizes documents automatically into meaningful groups. Clustering is an important task in data mining and machine learning, and its accuracy depends tightly on the choice of text representation method. Traditional methods of text representation model documents as bags of words using term frequency-inverse document frequency (TF-IDF) weighting. This method ignores the relationship an
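The TF-IDF baseline described above can be sketched in a few lines. This is a minimal illustration with invented toy documents and an off-the-shelf scikit-learn pipeline, not the authors' system: documents become sparse TF-IDF vectors, and k-means groups them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Four toy documents: two about finance, two about football (illustrative).
docs = [
    "stock market trading shares",
    "market shares and stock prices",
    "football match goal score",
    "the goal decided the football match",
]

# Bag-of-words TF-IDF representation: one sparse row per document.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Cluster the TF-IDF vectors into two groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Because TF-IDF treats each term independently, the two finance documents cluster together only through shared surface words ("stock", "market", "shares"); semantically related but lexically different documents would not, which is exactly the limitation the abstract points toward.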
Fuzzy Based Clustering for Grayscale Image Steganalysis
The feature extraction step plays a major role in proper object classification and recognition, and it depends mainly on correct object detection in the given scene. Object detection algorithms may produce noise that affects the final object shape. A novel approach is introduced in this paper for filling the holes in a detected object, yielding better object detection and correct feature extraction. The method is based on the definition of a hole as a black pixel surrounded by a connected boundary region; it therefore tries to find a connected contour region that surrounds each background pixel using a roadmap racing algorithm. The method shows good results on 2D objects.
Keywords: object filling, object detection, objec
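For comparison with the roadmap racing approach described above, the same hole definition (background pixels not connected to the image border) is handled by standard morphological hole filling. The sketch below uses SciPy's `binary_fill_holes` on a toy mask; it illustrates the task, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# A 7x7 binary object mask with a one-pixel hole in its interior.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
mask[3, 3] = False  # the "hole": background surrounded by a connected boundary

# Fill every background region that cannot reach the image border.
filled = binary_fill_holes(mask)
```

After filling, the interior pixel belongs to the object while the true background outside the boundary is untouched, so downstream shape features (area, moments, contours) are computed on a solid object.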
The research aims to identify the importance of using time-driven activity-based costing (TDABC) and its role in determining product costs more equitably, and thus its impact on resource-allocation policy, by reflecting the changes that occur on an ongoing basis in product specifications and hence in the nature and type of operations. The research was conducted at the General Company for Textile Industries in Wasit / knitting socks factory. It is based on the main hypothesis that it is possible to calculate the cost of the activities that drive production through the time those activities take to run, and then to redistribute that cost to product cost
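The TDABC calculation hypothesized above rests on two estimates: a capacity cost rate (cost of supplied capacity divided by practical capacity in time units) and the time each activity consumes per product unit. The figures below are hypothetical, for illustration only; they are not data from the Wasit company study.

```python
# Hypothetical figures for a knitting line (illustrative, not from the study).
capacity_cost = 50_000.0       # cost of supplied capacity per period
practical_capacity = 10_000.0  # practical capacity in minutes per period
rate = capacity_cost / practical_capacity  # capacity cost rate: 5.0 per minute

# Estimated minutes each activity consumes per unit of product (hypothetical).
unit_times = {"knitting": 6.0, "dyeing": 3.0, "packing": 1.0}

# TDABC unit cost: time consumed by each activity, priced at the rate.
unit_cost = sum(t * rate for t in unit_times.values())  # 10 min * 5.0 = 50.0
```

When a product's specification changes, only its time equation (`unit_times`) needs updating; the rate stays fixed, which is the mechanism by which TDABC tracks continuous change in operations.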
This paper attempts to develop statistical modeling for air quality analysis in Jakarta, Indonesia, during an emergency state of community activity restrictions enforcement (Emergency CARE), using parameters such as PM10, PM2.5, SO2, CO, O3, and NO2 from five IoT-based air monitoring systems. These parameters are critical for assessing air quality conditions and the concentration of air pollutants. Variations in outdoor air pollution concentrations before and after the Emergency CARE, which was imposed in Indonesia during the COVID-19 pandemic on July 3-21, 2021, were studied. An air quality monitoring system based on the IoT generates sensor data
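The core before/after comparison described above reduces to contrasting summary statistics of a pollutant series around the restriction date. The sketch below uses synthetic PM2.5 readings invented for illustration; they are not measurements from the Jakarta stations.

```python
import numpy as np

# Hypothetical daily PM2.5 readings (ug/m3) from one IoT station,
# split around the start of the Emergency CARE period (3 July 2021).
before = np.array([48.0, 52.0, 47.0, 55.0, 50.0])
during = np.array([35.0, 33.0, 38.0, 31.0, 36.0])

# Relative change in the mean concentration across the two windows.
change = (during.mean() - before.mean()) / before.mean() * 100
print(f"Mean PM2.5 change: {change:.1f}%")
```

In a full analysis the same contrast would be run per pollutant and per station, typically with a significance test on the two windows rather than a raw difference of means.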