Mersing is one of the locations with potential for wind power development in Malaysia, and researchers often suggest it as an ideal site for generating electricity from wind. However, before a location is chosen, several factors need to be considered: analyzing the site ahead of time avoids wasting resources and maximizes returns for the various parties involved. This study focuses on identifying the wind speed distribution of Mersing and determining the optimal averaging interval for wind speed. The study is important because the wind speed data of any region follows its own distribution, which changes daily and seasonally, and no standard has yet been established for selecting the averaging interval used in wind studies. The wind speed data were averaged over 1, 10, 30, and 60 minutes to find the optimal averaging interval, and the Kolmogorov-Smirnov and Chi-Square tests were used to assess goodness of fit. The findings show that the wind speed distribution in Mersing varies with the averaging interval used, and that the best-fit distribution is the generalized Gamma. The optimal averaging interval is 10 minutes, as it yields results most similar to the 1-minute data. These choices affect the reliability of the findings, the accuracy of the estimates, and the decisions made; applying this approach therefore yields a more accurate wind distribution for a given area.
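The fit-then-test step described here can be sketched with SciPy. The sample below uses synthetic wind speeds (the real Mersing measurements are not reproduced in this abstract), fits the generalized Gamma family reported as the best fit, and applies the Kolmogorov-Smirnov goodness-of-fit test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 1-minute mean wind speeds in m/s; stand-in for the Mersing data.
wind = rng.gamma(shape=2.0, scale=2.5, size=1000)

# Fit a generalized Gamma distribution (loc pinned to 0 for a speed variable).
params = stats.gengamma.fit(wind, floc=0)

# Kolmogorov-Smirnov test of the sample against the fitted distribution.
ks_stat, p_value = stats.kstest(wind, "gengamma", args=params)
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```

The same call pattern can be repeated on the 10-, 30-, and 60-minute averages to compare candidate intervals, as the study does.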
Empirical equations for estimating the thickening time and compressive strength of bentonitic class "G" cement slurries were derived as functions of the water-to-cement ratio and apparent viscosity (for any ratio). Such equations allow oilfield thickening time and compressive strength values to be extracted easily, saving time by avoiding laboratory tests such as the pressurized-consistometer thickening time test and the hydraulic cement mortar compressive strength test with 24-hour water-bath curing, which may take more than one day.
The present study aimed to evaluate the therapeutic effect of chitosan extracted from mushroom fungus, and of pure chitosan, on glucose and lipid profiles in the blood of 35 male rabbits with hyperlipidemia induced experimentally by cholesterol. The tests included estimation of glucose, total cholesterol, triglycerides, high-density lipoproteins, low-density lipoproteins, and very-low-density lipoproteins. Hyperlipidemia was induced in the male rabbits by administering cholesterol orally at 150 mg/kg body weight for a week. The rabbits were divided into seven groups: control, cholesterol, pure chitosan, mushroom chitosan, cholesterol and pure chitosan, cholesterol and mushroom chitosan, and cholestero
The primary function of commercial banks is converting liquid liabilities such as deposits into illiquid assets (loans) and liquid assets (cash and cash equivalents) in a balanced manner that safeguards the rights of both depositors and the bank, rather than converting a very large share of liquid liabilities into liquid assets. This follows from their role as depository institutions and intermediaries between supply and demand. We therefore find that high indicators of bank liquidity and solvency may paint a somewhat misleading picture of the status of commercial banks in terms of the strength of their balance sheets and
Wireless sensor applications are constrained by limited energy, most of which is consumed in communication between wireless nodes. Clustering and data aggregation are the two most widely used strategies for reducing energy usage and extending the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly, so deploying effective data aggregation schemes is vital to eliminate this redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficient data aggregation in target tracking applications, since selecting an appropriate clustering algorithm may yield positive results in data aggregation.
Visual analytics has become an important approach for discovering patterns in big data. Since visualization already struggles with high-dimensional data, issues such as a concept hierarchy on each dimension add further difficulty and can make visualization prohibitive. A data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and special exploration operations such as roll-up, drill-down, slicing, and dicing. All of these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visualization.
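To make the roll-up and dice operations concrete, here is a minimal pure-Python sketch over a hypothetical three-dimensional cube; the dimension names, cell keys, and values are illustrative only and not taken from the paper:

```python
from collections import defaultdict

# Hypothetical fact table: (city, month, product) -> units sold.
cells = {
    ("KL", "Jan", "A"): 10, ("KL", "Jan", "B"): 5,
    ("KL", "Feb", "A"): 7,  ("JB", "Jan", "A"): 3,
}

def roll_up(cells, keep_dims):
    """Aggregate away every dimension not listed in keep_dims (a roll-up)."""
    out = defaultdict(int)
    for key, value in cells.items():
        out[tuple(key[d] for d in keep_dims)] += value
    return dict(out)

def dice(cells, dim, allowed):
    """Keep only cells whose coordinate on `dim` lies in `allowed` (a dice)."""
    return {k: v for k, v in cells.items() if k[dim] in allowed}

# Roll up to city level (drop month and product), then dice on month = Jan.
print(roll_up(cells, keep_dims=(0,)))       # {('KL',): 22, ('JB',): 3}
print(dice(cells, dim=1, allowed={"Jan"}))
```

Drill-down is simply the inverse of roll-up (keeping more dimensions); slicing is a dice with a single allowed value.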
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each impacts the others. The data were acquired from an Iraqi private biochemical laboratory; they have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), and Naïve Bayes (NB).
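A minimal sketch of the preprocessing-then-classification workflow described above, using synthetic data in place of the laboratory dataset (which is not public) and a hand-rolled K-NN as one of the listed supervised methods:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical test matrix with ~20% missing values, mimicking the high null rate.
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.2] = np.nan
y = (np.nan_to_num(X[:, 0]) > 0).astype(int)   # synthetic label for illustration

# Preprocessing: mean-impute the nulls, then z-score each test (column).
col_mean = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_mean, X)
X_std = (X_imp - X_imp.mean(axis=0)) / X_imp.std(axis=0)

def knn_predict(train_X, train_y, query, k=5):
    """Classify `query` by majority vote among its k nearest training points."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

preds = np.array([knn_predict(X_std[:150], y[:150], q) for q in X_std[150:]])
acc = (preds == y[150:]).mean()
print(f"hold-out accuracy: {acc:.2f}")
```

The same imputed, standardized matrix could equally be fed to LDA, CART, LR, or NB for the comparison the study performs.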
Portable devices such as smartphones, tablet PCs, and PDAs are a useful combination of hardware and software oriented toward mobile workers. While they offer the ability to review documents, communicate via electronic mail, and manage appointments and meetings, they usually lack a variety of essential security features. To address concerns about sensitive data, many individuals and organizations, aware of the associated threats, mitigate them by improving user authentication, encrypting content, protecting against malware, and deploying firewalls, intrusion prevention, and similar measures. However, no standards have yet been developed to determine whether such mobile data management systems adequately provide the fu
In recent years, data centre (DC) networks have improved their rapid exchange abilities. Software-defined networking (SDN) is presented as an alternative to the conventional network model, segregating the control plane from the data plane. SDN overcomes the limitations of traditional DC networks caused by rapidly increasing numbers of applications, websites, data storage needs, etc. Software-defined networking data centres (SDN-DC), based on the OpenFlow (OF) protocol, are used to achieve superior behaviour when executing traffic load-balancing (LB) jobs. The LB function divides traffic-flow demands between the end devices to avoid link congestion. In short, SDN is proposed to manage more operative configurations.
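The controller-side LB decision can be illustrated with a simple least-loaded link policy; the link names, demands, and units below are hypothetical and are not drawn from any specific SDN-DC testbed or from the study's own algorithm:

```python
# Current load on each candidate link (arbitrary units); names are illustrative.
links = {"s1-s2": 0.0, "s1-s3": 0.0, "s1-s4": 0.0}

def assign_flow(links, demand):
    """Place a new flow on the least-loaded link and update that link's load."""
    target = min(links, key=links.get)
    links[target] += demand
    return target

flows = [4.0, 2.0, 3.0, 1.0]                 # incoming flow demands
placement = [assign_flow(links, d) for d in flows]
print(placement, links)
```

Spreading each new flow onto the least-loaded link keeps the per-link loads close together, which is the congestion-avoidance goal the abstract describes.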
Permeability data is of major importance and must be handled carefully in all reservoir simulation studies. Its importance increases in mature oil and gas fields because of its sensitivity to the requirements of certain improved-recovery methods. However, the industry holds a huge store of air permeability measurements but only a small number of liquid permeability values, owing to the relatively high cost of special core analysis.
The current study suggests a correlation to convert the air permeability data conventionally measured during laboratory core analysis into liquid permeability. This correlation offers a feasible estimate in cases of data loss and poorly consolidated formations, or in cas
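The study's own correlation is not given in this excerpt; as background, the classical Klinkenberg slippage correction is one standard way to relate air and liquid permeability, sketched below (the slippage factor and all numeric inputs are illustrative only):

```python
def klinkenberg_liquid_permeability(k_air_md, b_psi, p_mean_psi):
    """Estimate equivalent liquid permeability from an air measurement using the
    classical Klinkenberg gas-slippage correction:
        k_air = k_liquid * (1 + b / p_mean)
    where b is the rock- and gas-dependent slippage factor and p_mean the mean
    test pressure. Values below are illustrative, not from the study's dataset.
    """
    return k_air_md / (1.0 + b_psi / p_mean_psi)

k_liq = klinkenberg_liquid_permeability(k_air_md=120.0, b_psi=5.0, p_mean_psi=50.0)
print(f"estimated liquid permeability: {k_liq:.1f} mD")
```

Any empirical correlation of the kind the study proposes would play the same role as this correction: mapping routinely measured air permeabilities to the scarcer liquid values.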