Delivering 4K (Ultra HD) real-time video over the Internet at low bitrate and low latency is the challenge this paper addresses. Compression technology and transmission links are the key elements that influence video quality, so to deliver video over the Internet or any other fixed-capacity medium it is essential to compress it to manageable bitrates (customarily in the 1-20 Mbps range). In this study, video quality is examined using the H.265/HEVC compression standard, and the relationship between video quality and bitrate is investigated for various constant rate factors (CRF), GOP patterns, quantization parameters, rc-lookahead values, and video sequences with different types of motion. An ultra-high-definition source is downsampled and encoded at resolutions of 3840x2160, 1920x1080, 1280x720, 704x576, 352x288, and 176x144. Experiments were conducted to determine the best H.265 feature configuration for each resolution, yielding a PSNR of 36 dB at the specified bitrate. The delivered resolution is selected by the encoder, based on its resources and the end-user application, while adaptation of the stream to the available bandwidth is achieved by embedding a controller with the MPEG-DASH protocol at the client side. Adaptive streaming methods deliver content encoded at several representations of video quality and bitrate, with each representation divided into time chunks. In this paper we propose using HTTP/2 to achieve low-latency video streaming, focusing on live streaming and avoiding the problems of HTTP/1.
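As a concrete illustration of the encoding setup described above, the sketch below drives ffmpeg with libx265 to produce one representation per resolution, exposing the CRF, GOP length, and rc-lookahead controls the study varies. The file names, CRF ladder, and default values are assumptions made for the example, not the paper's measured configurations.

# Sketch: encoding one DASH-style representation per resolution with ffmpeg/libx265.
# File names and parameter values are illustrative, not the paper's exact settings.
import subprocess

REPRESENTATIONS = [          # (width, height, CRF) -- example ladder only
    (3840, 2160, 22),
    (1920, 1080, 24),
    (1280, 720, 26),
    (704, 576, 28),
    (352, 288, 30),
    (176, 144, 32),
]

def encode(src: str, width: int, height: int, crf: int,
           gop: int = 48, rc_lookahead: int = 40) -> None:
    """Downsample and encode one H.265/HEVC representation."""
    out = f"out_{width}x{height}_crf{crf}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale={width}:{height}",
        "-c:v", "libx265",
        "-crf", str(crf),               # constant rate factor
        "-g", str(gop),                 # GOP length
        "-x265-params", f"rc-lookahead={rc_lookahead}",
        out,
    ], check=True)

for w, h, crf in REPRESENTATIONS:
    encode("source_2160p.mp4", w, h, crf)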
The study aims to build a training program based on Connectivism Theory to develop e-learning competencies for Islamic education teachers in the Governorate of Dhofar, and to assess its effectiveness. The study sample consisted of 30 randomly selected Islamic education teachers who took part in the training program. The study used the descriptive approach to determine the electronic competencies and build the training program, and the quasi-experimental approach to determine the program's effectiveness. The study tools were a cognitive achievement test and an observation card, which were applied before and after the program. The study found that the effectiveness of the training program
Realizing and understanding semantic segmentation is a demanding task, not only in computer vision but also in the earth sciences. Semantic segmentation decomposes compound structures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object, making it a method for labeling and clustering point clouds automatically. Three-dimensional natural-scene classification requires a point-cloud dataset as its input data representation, and working with 3D data raises several challenges, such as the small number, resolution, and accuracy of available three-dimensional datasets. Deep learning now is the po
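The abstract above is cut off, but it describes deep-learning semantic segmentation of 3D point clouds. As a generic illustration only, here is a minimal PointNet-style per-point classifier in PyTorch; the network, class count, and input cloud are placeholders and are not taken from the paper.

# Minimal per-point classification sketch (illustrative only).
import torch
import torch.nn as nn

class PerPointMLP(nn.Module):
    """Shared MLP applied to every (x, y, z) point, PointNet-style."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points):          # points: (N, 3)
        return self.net(points)         # per-point class logits: (N, num_classes)

cloud = torch.rand(1024, 3)             # stand-in for a real point-cloud scan
labels = PerPointMLP()(cloud).argmax(dim=1)
print(labels.shape)                     # torch.Size([1024])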
Currently, with the huge increase in modern communication and network applications, fast transmission and compact storage of data are pressing issues. An enormous number of images is stored and shared among people every moment, especially in the social media realm, but even with these marvelous applications the limited size of transmitted data remains the main restriction. Essentially all of these applications rely on the well-known Joint Photographic Experts Group (JPEG) standard techniques; in the same way, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different
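Since the review centers on JPEG-style compression, the following minimal sketch shows the 8x8 block DCT, quantization, and reconstruction steps such systems rely on. The flat quantization table and random block are illustrative stand-ins, not material from the review.

# JPEG-style block transform coding sketch (illustrative only).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

Q = 20 * np.ones((8, 8))    # flat quantization table; real JPEG uses a perceptual table

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128   # one level-shifted 8x8 block
coeffs = np.round(dct2(block) / Q)          # transform + quantize (the lossy step)
reconstructed = idct2(coeffs * Q) + 128     # dequantize + inverse transform
print(np.abs(block + 128 - reconstructed).max())   # per-pixel reconstruction error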
Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system involves three important stages: pre-processing and face-area allocation, feature extraction, and classification. In this study, a new system is built on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation purposes, the standard "JAFFE" database has been used as test material; it holds face samples for the seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances, angles
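As a rough illustration of the kind of geometric features the abstract describes, the sketch below computes a distance and an angle from hypothetical landmark coordinates; the landmark positions and the chosen feature set are assumptions, not the paper's exact definitions.

# Distance and angle features from facial landmarks (illustrative only).
import numpy as np

def distance(p, q):
    """Euclidean distance between two landmark points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def angle(a, b, c):
    """Angle (degrees) at vertex b formed by landmarks a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark coordinates (x, y) for one face image.
left_eye, right_eye, mouth_center = (120, 140), (200, 138), (160, 230)
features = [distance(left_eye, right_eye),
            distance(left_eye, mouth_center),
            angle(left_eye, mouth_center, right_eye)]
print(features)   # feature vector fed to a feed-forward neural network classifier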
Recently, personal recommender systems have spread quickly because of their role in helping users make decisions. Location-based recommender systems are one such class: they sense a person's location and suggest the best services in his area. Unfortunately, systems that depend on explicit user ratings suffer from cold-start and sparsity problems. The proposed system recommends a hotel based on the user's current position and on review analysis. The hybrid sentiment analyzer consists of a supervised sentiment analyzer as the first stage and a lexicon sentiment analyzer as the second stage. This system contributes over a plain sentiment analyzer by extracting the aspects that users have mentioned
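A minimal sketch of a two-stage hybrid sentiment analyzer in the spirit of the abstract: a supervised classifier scores a review first, and a lexicon scorer is consulted when the classifier is unsure. The toy training data, lexicon, and confidence-threshold rule are assumptions; the paper's actual combination logic is not given in the abstract.

# Hybrid (supervised + lexicon) sentiment sketch with placeholder data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_reviews = ["great hotel clean room", "terrible service dirty room",
                 "friendly staff nice view", "awful noisy bad breakfast"]
train_labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_reviews, train_labels)

LEXICON = {"great": 1, "nice": 1, "clean": 1, "friendly": 1,
           "terrible": -1, "dirty": -1, "awful": -1, "noisy": -1}

def lexicon_score(text: str) -> int:
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def hybrid_sentiment(text: str, threshold: float = 0.7) -> int:
    proba = clf.predict_proba([text])[0]
    if proba.max() >= threshold:                  # trust the supervised stage
        return int(clf.classes_[proba.argmax()])
    return 1 if lexicon_score(text) >= 0 else 0   # fall back to the lexicon stage

print(hybrid_sentiment("clean room but noisy street"))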
Visible light communication (VLC) is an upcoming wireless technology for next-generation, high-speed data transmission. It has the potential for capacity enhancement because of its characteristically large bandwidth. Concerning signal processing and suitable transceiver design for VLC applications, an amplification-based optical transceiver is proposed in this article. The transmitter consists of a driver and a laser diode as the light source, while the receiver contains a photodiode and a signal-amplifying circuit. The design model is proposed for its simplicity, replacing the trans-impedance and transconductance circuits of conventional modules with a simple amplification circuit and interface converter. Th
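For intuition about an amplification-based VLC link of the kind described, here is a toy end-to-end model: on-off-keyed bits drive the optical power, a photodiode converts received light to current, and a single amplification stage produces the decision voltage. All component values (responsivity, loss, gain, noise) are illustrative assumptions, not measurements from the article.

# Toy amplification-based VLC link model (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)     # on-off keyed data

P_tx = 5e-3 * bits                    # laser-diode optical power per bit (W)
responsivity = 0.5                    # photodiode responsivity (A/W)
channel_loss = 0.1                    # fraction of light reaching the photodiode
gain = 1e4                            # simple amplification stage (V/A)

i_pd = responsivity * channel_loss * P_tx                 # photocurrent (A)
v_out = gain * i_pd + rng.normal(0.0, 0.3, bits.size)     # amplified output + noise (V)
decided = (v_out > v_out.mean()).astype(int)              # threshold detection

print("bit error rate:", np.mean(decided != bits))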
Steganography involves concealing information by embedding data within cover media, and it can be categorized into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain and utilizes the least significant bits (LSBs) to conceal a secret message. The second operates in the frequency domain and hides the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation by utilizing two layers of randomness, random pixel embedding and random bit embedding within each pixel, unlike other available methods that embed data in sequential order with a fixed amount.
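A minimal sketch of the spatial-domain idea with one of the two randomness layers: message bits are written into the LSBs of pixels chosen by a seeded pseudo-random permutation, so the same seed recovers them. The image, message, and key handling are placeholders, and the paper's second layer (random bit position within each pixel) is not shown.

# Random-position LSB embedding sketch (placeholder cover image and message).
import numpy as np

def embed(cover: np.ndarray, message_bits: np.ndarray, seed: int) -> np.ndarray:
    stego = cover.copy().ravel()
    order = np.random.default_rng(seed).permutation(stego.size)[:message_bits.size]
    stego[order] = (stego[order] & 0xFE) | message_bits      # overwrite each chosen LSB
    return stego.reshape(cover.shape)

def extract(stego: np.ndarray, n_bits: int, seed: int) -> np.ndarray:
    order = np.random.default_rng(seed).permutation(stego.size)[:n_bits]
    return stego.ravel()[order] & 1

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
secret = np.random.default_rng(2).integers(0, 2, 128, dtype=np.uint8)
stego = embed(cover, secret, seed=42)                        # seed acts as the shared key
assert np.array_equal(extract(stego, secret.size, seed=42), secret)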