Estimates of average crash density as a function of traffic elements and characteristics can support sound decisions in planning, designing, operating, and maintaining roadway networks. This study describes the relationships between total, collision, overturn, and run-over accident densities and factors such as hourly traffic flow and average spot speed on multilane rural highways in Iraq. The study is based on data collected from two sources: police stations and traffic surveys. Three highways in Wassit governorate are selected as a case study to cover the studied accident locations: the Kut–Suwera, Kut–ShekhSaad, and Kut–Hay multilane divided highways in the south of Iraq. A preliminary presentation of the studied highways was prepared using Geographic Information System (GIS) software. Data collection covered crash numbers, types, and locations over five years, hourly traffic flow, and average spot speed, and defined the roadway segment lengths at crash locations. The cumulative speed distribution curves show that the spot speed spectrum of the whole traffic stream on each highway extends over a relatively wide range, with a maximum speed of 180 kph and a minimum speed of 30 kph. Multiple linear regression analysis is applied to the data using SPSS software to obtain the relationships between the dependent and independent variables and to identify the elements most strongly correlated with crash densities. Four regression models are developed, which verify good and statistically strong relationships between crash densities and the studied factors. The results show that traffic volume and driving speed have a significant impact on crash densities, indicating a positive correlation between these factors and crash occurrence: the higher the traffic volume and the faster the driving speed, the more likely a crash becomes. As hourly automobile traffic flow grows, the need for safe traffic facilities also grows.
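As a rough illustration of the regression step described in this abstract, the sketch below fits a multiple linear regression of crash density on hourly traffic flow and average spot speed. The column names and the numbers are made-up placeholders, not the study's data or its SPSS output.

```python
# Minimal sketch of the multiple linear regression step; the observations below
# are made-up placeholders, not the study's data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "flow_vph":       [420, 610, 850, 1020, 1280, 1510],   # hourly traffic flow
    "spot_speed_kph": [78,   84,  91,   97,  104,  112],   # average spot speed
    "crash_density":  [0.6, 0.9, 1.3,  1.6,  2.1,  2.6],   # crashes per km
})

X = sm.add_constant(df[["flow_vph", "spot_speed_kph"]])     # predictors + intercept
y = df["crash_density"]

model = sm.OLS(y, X).fit()                                  # ordinary least squares
print(model.params)                                         # fitted coefficients
print(model.rsquared)                                       # goodness of fit
```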
A space X is called πp-normal if for each closed set F and each π-closed set F′ in X with F ∩ F′ = ∅, there are p-open sets U and V of X with U ∩ V = ∅ such that F ⊆ U and F′ ⊆ V. Our work studies and discusses a new kind of normality in generalized topological spaces. We define ϑπp-normal, ϑ-mildly normal, ϑ-almost normal, ϑp-normal, ϑ-mildly p-normal, ϑ-almost p-normal, and ϑπ-normal spaces, and we discuss some of their properties.
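For readability, the defining condition of πp-normality stated above can be written compactly as follows; this simply restates the sentence in the abstract in symbols.

```latex
% X is \pi p-normal iff disjoint closed / \pi-closed sets are separated by disjoint p-open sets
X \text{ is } \pi p\text{-normal} \iff
\forall F \text{ closed},\ \forall F' \ \pi\text{-closed},\ F \cap F' = \emptyset \implies
\exists\, U, V \ p\text{-open}:\ F \subseteq U,\ F' \subseteq V,\ U \cap V = \emptyset .
```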
Long before the pandemic, the labour force all over the world was facing the challenge of uncertainty, which is normal and inherent to the market, but the extent of this challenge was shaped by the pace of acceleration of technological progress, which became exponential in the last ten years, from 2010 to 2020. Robotic process automation, remote work, computer science, electronics and communications, mechanical engineering, information technology, digitalisation of public administration, and so on are among the pillars of the future of work. Some authors have even stated that without robotic process automation (RPA) included in their technological processes, companies will not be able to sustain a competitive level on the market (Madakan et al, 2018).
Throughout this paper R represents a commutative ring with identity and M is a unitary left R-module. The purpose of this paper is to investigate some new results (to our knowledge) on the concept of weak essential submodules introduced by Muna A. Ahmed, where a submodule N of an R-module M is called weak essential if N ∩ P ≠ (0) for each nonzero semiprime submodule P of M. In this paper we rewrite this definition in another formulation. Some new definitions are introduced and various properties of weak essential submodules are considered.
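Written out symbolically, the weak essential condition quoted above (reconstructed here from the garbled symbols in the original as N ∩ P ≠ (0)) reads:

```latex
% N is weak essential in M iff it meets every nonzero semiprime submodule nontrivially
N \le M \text{ is weak essential} \iff
N \cap P \neq (0) \quad \text{for every nonzero semiprime submodule } P \le M .
```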
Interval methods for verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods have the ability to compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimations in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an ef…
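To make the dependency problem mentioned above concrete, here is a tiny self-contained interval arithmetic sketch, not any specific verified-integration or Taylor-model library: naive interval evaluation of x - x over [0, 1] yields [-1, 1] rather than the exact result [0, 0], because the two occurrences of x are treated as independent.

```python
# Minimal interval type illustrating the dependency problem; this is an
# illustrative sketch, not a verified-integration or Taylor-model library.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __sub__(self, other):
        # Interval subtraction ignores that both operands may be the same
        # variable, which is exactly the dependency problem.
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(0.0, 1.0)
print(x - x)   # Interval(lo=-1.0, hi=1.0), although x - x is exactly 0
```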
In this paper we introduce the concept of hyper AT-algebras as a generalization of AT-algebras, study the hyper structure of AT-algebras, and investigate some of its properties. Hyper AT-subalgebras and hyper AT-ideals of hyper AT-algebras are also studied, together with the fuzzy theory of hyper AT-ideals of hyper AT-algebras. We study homomorphisms of hyper AT-algebras, which are a common generalization of AT-algebras.
We deal with the nature of periodic points under chaotic functions and their associated functions, and with sufficient conditions for a function to be strongly chaotic.
The Effect of Video on Youths' Values
The architectural complexity of Systems on Chips (SoCs) is the result of integrating a large number of cores on a single chip. Design approaches should address the system's particular challenges, such as reliability, performance, and power constraints. Monitoring has become a necessary part of testing, debugging, and performance evaluation of SoCs at run time, as on-chip monitoring is employed to provide environmental information such as temperature, voltage, and error data. Real-time system validation is done by exploiting this monitoring to determine whether a system operates properly within its designed parameters. The paper explains the common monitoring operations in SoCs, showing the functionality of thermal, voltage, and soft error monitors. The different…
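As a loose illustration of the run-time validation idea described above, the following sketch polls hypothetical temperature and voltage readings and flags values outside their designed limits; the reader functions and the limits are placeholders, not any particular SoC's monitoring interface.

```python
# Illustrative run-time check against designed operating limits; the sensor
# readers and limits below are hypothetical placeholders.
import random

def read_temperature_c():      # stand-in for an on-chip thermal monitor
    return random.uniform(30.0, 110.0)

def read_voltage_v():          # stand-in for an on-chip voltage monitor
    return random.uniform(0.85, 1.05)

LIMITS = {"temperature_c": (0.0, 95.0), "voltage_v": (0.90, 1.00)}

def validate(sample):
    """Return the list of parameters that fall outside their designed range."""
    return [name for name, value in sample.items()
            if not (LIMITS[name][0] <= value <= LIMITS[name][1])]

sample = {"temperature_c": read_temperature_c(), "voltage_v": read_voltage_v()}
print(sample, "violations:", validate(sample))
```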
Audio classification is the process of classifying different audio types according to their contents. It is applied in a large variety of real-world problems; classification applications allow the target subjects to be viewed as a specific type of audio, and hence there is a variety of audio types, each of which has to be treated carefully according to its significant properties. Feature extraction is an important process for audio classification. This work introduces several sets of features according to the audio type; two types of audio (datasets) were studied. Two different feature sets are proposed: (i) a first-order gradient feature vector, and (ii) a local roughness feature vector. The experiments showed that the results are competitive to…
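As a rough sketch of the kind of feature extraction named above, the code below computes a simple first-order gradient feature vector from an audio signal as summary statistics over the sample-to-sample differences; the exact definitions of the paper's gradient and local roughness features are not given in the abstract, so this is only an assumed illustration.

```python
# Assumed illustration of a first-order gradient feature vector: summary
# statistics of the sample-to-sample differences of an audio signal.
import numpy as np

def first_order_gradient_features(signal: np.ndarray) -> np.ndarray:
    grad = np.diff(signal.astype(np.float64))      # first-order differences
    return np.array([grad.mean(), grad.std(),
                     np.abs(grad).mean(), np.abs(grad).max()])

# Usage on a synthetic one-second 440 Hz tone sampled at 16 kHz
signal = np.sin(2 * np.pi * 440 * np.linspace(0.0, 1.0, 16000))
print(first_order_gradient_features(signal))
```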