Ground-based active optical sensors (GBAOS) have been used successfully in agriculture to predict crop yield potential (YP) early in the season and to adjust N rates for optimal crop yield. However, the models were found to be weak or inconsistent due to environmental variation, especially rainfall. The objective of the study was to evaluate whether GBAOS could predict YP across multiple locations, soil types, cultivation systems, and rainfall differences. The study was carried out from 2011 to 2013 on corn (Zea mays L.) in North Dakota, and in 2017 on potatoes in Maine. Six N rates were used on 50 sites in North Dakota and 12 N rates on two sites, one dryland and one irrigated, in Maine. The two active GBAOS used for this study were the GreenSeeker and the Holland Scientific Crop Circle sensors ACS-470 (HSCCACS-470) and ACS-430 (HSCCACS-430). Rainfall data, with or without crop height, improved the YP models in terms of reliability and consistency. The polynomial model performed somewhat better than the exponential model. In the North Dakota corn study, the relationship between sensor readings multiplied by rainfall and crop yield differed significantly by soil type (clay versus medium textured) and by cultivation system (conventional versus no-till). The two potato sites in Maine, irrigated and dryland, differed in total yield, and rainfall data helped to improve the sensor-based YP models. In conclusion, this study strongly supports incorporating rainfall data into sensor-based N rate calculator algorithms.
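As an illustrative sketch only (not the study's actual dataset or fitted model), a yield-potential regression of the kind described above can be built by combining a vegetation index with cumulative rainfall and comparing polynomial and exponential fits; all variable names and numbers below are hypothetical.

```python
# Hypothetical sketch: fitting polynomial and exponential yield-potential (YP) models
# from a sensor vegetation index multiplied by cumulative rainfall.
# The data are illustrative, not the study's measurements.
import numpy as np

ndvi = np.array([0.35, 0.42, 0.51, 0.58, 0.63, 0.70, 0.74, 0.80])   # sensor readings
rain_mm = np.array([120, 150, 180, 200, 230, 260, 280, 300])        # cumulative rainfall
yield_t_ha = np.array([4.1, 5.0, 6.2, 7.1, 7.9, 8.8, 9.2, 9.9])     # observed yield

x = ndvi * rain_mm  # predictor: sensor reading scaled by rainfall

# Second-order polynomial model: YP = a*x^2 + b*x + c
poly_coef = np.polyfit(x, yield_t_ha, deg=2)
poly_pred = np.polyval(poly_coef, x)

# Exponential model: YP = a*exp(b*x), fit by linearising ln(YP) = ln(a) + b*x
b, ln_a = np.polyfit(x, np.log(yield_t_ha), deg=1)
exp_pred = np.exp(ln_a) * np.exp(b * x)

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("polynomial R^2:", r_squared(yield_t_ha, poly_pred))
print("exponential R^2:", r_squared(yield_t_ha, exp_pred))
```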
Extracting knowledge from raw data has delivered beneficial information in several domains. The prevalent use of social media has produced extraordinary quantities of social data. Simply put, social media provides an accessible platform for users to share information. Data mining has the ability to reveal relevant patterns that can be useful for users, businesses, and customers. Social media data are noisy, massive, unstructured, and dynamic in nature, so new challenges arise. The purpose of this study is to investigate the data mining methods used on social networks, adopting a review plan based on defined criteria and selecting a number of papers to serve as the foundation for this article.
In recent years, data centre (DC) networks have improved their rapid data-exchange capabilities. Software-defined networking (SDN) has been introduced to change the design of conventional networks by separating the control plane from the data plane. SDN overcomes the limitations of traditional DC networks caused by the rapidly increasing numbers of applications, websites, data storage needs, etc. Software-defined networking data centres (SDN-DC), based on the OpenFlow (OF) protocol, are used to achieve better performance when executing traffic load-balancing (LB) jobs. The LB function divides the traffic-flow demands between the end devices to avoid link congestion. In short, SDN is proposed to manage more effective network configuration…
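As a hypothetical illustration of the load-balancing idea (not the paper's OpenFlow controller logic), a controller can route each new flow over the candidate path whose most loaded link is least utilised; the topology, loads, and names below are invented for the sketch.

```python
# Hypothetical sketch of least-loaded path selection for SDN load balancing.
# A real OpenFlow controller would obtain link loads from port statistics
# and install flow rules on the chosen path.

link_load = {  # current utilisation per link (0.0 - 1.0), illustrative values
    ("s1", "s2"): 0.70, ("s2", "s4"): 0.60,
    ("s1", "s3"): 0.30, ("s3", "s4"): 0.20,
}

candidate_paths = [  # precomputed equal-cost paths between two hosts
    ["s1", "s2", "s4"],
    ["s1", "s3", "s4"],
]

def path_cost(path):
    """Bottleneck cost: the utilisation of the most loaded link on the path."""
    return max(link_load[(a, b)] for a, b in zip(path, path[1:]))

best = min(candidate_paths, key=path_cost)
print("route new flow via:", " -> ".join(best))  # s1 -> s3 -> s4
```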
Portable devices such as smartphones, tablet PCs, and PDAs are a useful combination of hardware and software aimed at mobile workers. While they provide the ability to review documents, communicate via electronic mail, and manage appointments and meetings, they usually lack a variety of essential security features. To address the security concerns around sensitive data, many individuals and organizations, aware of the associated threats, mitigate them through improved user authentication, content encryption, malware protection, firewalls, intrusion prevention, etc. However, no standards have been developed yet to determine whether such mobile data management systems adequately provide the fu…
A non-stationary series is always a problem for statistical analysis, as some theoretical work has explained: the properties of regression analysis are lost when non-stationary series are used in statistics, giving the slope of a spurious relationship between the variables under consideration. A non-stationary series can be made stationary by adding a time variable to the multivariate analysis to remove the general trend, as well as dummy seasonal variables to remove the seasonal effect, by converting the data to exponential or logarithmic form, or by applying repeated differencing d times, in which case the series is said to be integrated of order d. The theoretical side of the research is contained in several parts; in the first part, the research methodology has…
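A minimal sketch of the differencing step described above, using simulated data rather than the study's series; the trend and seasonal components are assumed for illustration.

```python
# Hypothetical sketch: making a non-stationary series stationary by differencing
# (integrated of order d) and removing a seasonal effect. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
series = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)  # trend + seasonality + noise

log_series = np.log(series - series.min() + 1)   # optional log transform to stabilise variance
d1 = np.diff(series, n=1)                        # first difference removes the linear trend
d1_seasonal = d1[12:] - d1[:-12]                 # seasonal difference (lag 12) removes the seasonal effect

print("variance of raw series:", round(float(series.var()), 2))
print("variance after differencing:", round(float(d1_seasonal.var()), 2))
```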
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each of them affects the others. The data were acquired from an Iraqi private biochemical laboratory. However, these data have many dimensions, a high rate of null values, and large patient numbers. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), and Naïve Bayes (NB)…
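As a hedged sketch of the supervised-comparison step listed above, the same family of classifiers can be benchmarked with cross-validation after imputing null values; the synthetic dataset stands in for the laboratory data, which are not available here.

```python
# Hypothetical sketch: comparing LDA, CART, LR, K-NN, and NB with cross-validation
# after imputing missing values. Synthetic data replace the laboratory dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier          # CART-style tree
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan  # simulate null values

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}

for name, model in models.items():
    pipe = make_pipeline(SimpleImputer(strategy="mean"), model)  # impute nulls, then classify
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```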
In the current paradigms of information technology, cloud computing is the most essential kind of computing service. It satisfies the needs of high-volume customers, provides flexible computing capabilities for a range of applications such as database archiving and business analytics, and offers extra computing resources that deliver financial value for cloud providers. The purpose of this investigation is to assess the viability of performing data audits remotely within a cloud computing setting. The theory behind cloud computing and distributed storage systems is discussed, as well as the method of remote data auditing. This research addresses how to safeguard data that is outsourced and stored on cloud servers…
Abstract
The logistic regression model is one of the nonlinear models that aims at obtaining highly efficient estimates. It also gives the researcher an idea of the effect of the explanatory variable on the binary response variable.
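A minimal sketch of the model just described, fitting a logistic regression for a binary response by maximum likelihood; the data, coefficients, and use of statsmodels are assumptions for illustration, not the paper's application.

```python
# Hypothetical sketch: logistic regression for a binary response on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)                        # explanatory variable
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))          # true success probability
y = rng.binomial(1, p)                          # binary response

model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(model.params)           # estimated intercept and slope
print(np.exp(model.params))   # odds ratios: effect of x on the binary response
```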
The accuracy of the Moment Method for imposing no-slip boundary conditions in the lattice Boltzmann algorithm is investigated numerically using lid-driven cavity flow. Boundary conditions are imposed directly upon the hydrodynamic moments of the lattice Boltzmann equations, rather than the distribution functions, to ensure the constraints are satisfied precisely at grid points. Both single and multiple relaxation time models are applied. The results are in excellent agreement with data obtained from state-of-the-art numerical methods and are shown to converge with second order accuracy in grid spacing.
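To make the distinction between distributions and moments concrete, the sketch below computes the zeroth and first hydrodynamic moments (density and momentum) of a D2Q9 lattice Boltzmann node; it is a generic illustration of the quantities the moment-based boundary method constrains, not the paper's boundary implementation.

```python
# Hypothetical D2Q9 sketch: hydrodynamic moments from lattice Boltzmann
# distribution functions at a single node.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Standard second-order equilibrium distribution for one node."""
    cu = c @ u
    usq = u @ u
    return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def moments(f):
    """Zeroth and first moments: density and velocity at one node."""
    rho = f.sum()
    momentum = f @ c
    return rho, momentum / rho

f = equilibrium(1.0, np.array([0.05, 0.0]))   # example node state
rho, u = moments(f)
print(rho, u)   # recovers density 1.0 and velocity (0.05, 0.0)
```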
In this paper, we are mainly concerned with estimating the (2+1) cascade reliability model based on the inverted exponential distribution and comparing the estimation methods used. The maximum likelihood estimator and uniformly minimum variance unbiased estimators are used to estimate the strengths and the stress, k = 1, 2, 3, respectively. Then, using the unbiased estimators, we propose a preliminary test single-stage shrinkage (PTSSS) estimator for the case where prior knowledge of the scale parameter is available as an initial value from past experience. The mean squared error (MSE) of the proposed estimator is derived to compare the methods. Numerical results on the behaviour of the considered methods…
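As a small sketch of one ingredient mentioned above, the maximum likelihood estimator of the scale parameter of an inverted exponential distribution has the closed form theta_hat = n / sum(1/x_i); the simulation below checks it on generated data and is not the paper's numerical study.

```python
# Hypothetical sketch: MLE of the scale parameter theta of an inverted exponential
# distribution, f(x; theta) = (theta / x**2) * exp(-theta / x), x > 0.
# MLE: theta_hat = n / sum(1/x_i). Data are simulated, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
theta_true = 2.0
# If U ~ Exponential(rate=theta), then X = 1/U follows the inverted exponential.
u = rng.exponential(scale=1/theta_true, size=1000)
x = 1.0 / u

theta_mle = x.size / np.sum(1.0 / x)
print("true theta:", theta_true, "MLE:", round(float(theta_mle), 3))
```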