At the level of both individuals and companies, Wireless Sensor Networks (WSNs) have a wide range of applications and uses. Sensors are used in many industries, including agriculture, transportation, health, and more, and their use is connected to many technologies, such as wireless communication protocols, the Internet of Things, cloud computing, mobile computing, and other emerging technologies. In many circumstances this interaction requires the transmission of crucial data, which must therefore be protected from potential threats. However, because WSN components often have constrained computation and power capabilities, protecting communication in WSNs comes at a significant performance penalty. Since conventional public-key and secret-key encryption methods demand massive computation, information security in this limited context calls for lightweight encryption techniques. Security is a crucial concern in many applications involving sensor networks, and a number of security procedures for wireless sensor networks are built on traditional cryptography. Symmetric-key encryption algorithms used in sensor-network settings include AES, RC5, SkipJack, and XXTEA. These algorithms, however, have flaws of their own, including susceptibility to chosen-plaintext attacks, brute-force attacks, and high computational complexity.
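To give a sense of how lightweight a cipher in this class can be, the following is a minimal, unoptimized sketch of XXTEA (Corrected Block TEA) operating on a list of 32-bit words with a 128-bit key. It is for illustration only: it omits padding, modes of operation, and any mitigation of the chosen-plaintext weaknesses mentioned above, and the key and block values are arbitrary examples.

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep all arithmetic modulo 2**32

def _mx(s, y, z, p, e, key):
    # XXTEA mixing function
    return ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4)))
            ^ ((s ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK

def xxtea_encrypt(v, key):
    """Encrypt a list of n >= 2 uint32 words with a 4-word (128-bit) key."""
    v, n = v[:], len(v)
    rounds, s, z = 6 + 52 // n, 0, v[-1]
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        e = (s >> 2) & 3
        for p in range(n):
            y = v[(p + 1) % n]
            z = v[p] = (v[p] + _mx(s, y, z, p, e, key)) & MASK
    return v

def xxtea_decrypt(v, key):
    """Inverse of xxtea_encrypt."""
    v, n = v[:], len(v)
    rounds = 6 + 52 // n
    s, y = (rounds * DELTA) & MASK, v[0]
    for _ in range(rounds):
        e = (s >> 2) & 3
        for p in range(n - 1, -1, -1):
            z = v[(p - 1) % n]
            y = v[p] = (v[p] - _mx(s, y, z, p, e, key)) & MASK
        s = (s - DELTA) & MASK
    return v

# Round-trip with an arbitrary example key and two-word block
key = [0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210]
block = [0xDEADBEEF, 0x00C0FFEE]
cipher = xxtea_encrypt(block, key)
plain = xxtea_decrypt(cipher, key)  # recovers the original block
```

Note that the entire cipher is a few dozen lines of shift, add, and XOR operations on word-sized values, which is what makes this family attractive for constrained sensor nodes despite its known cryptanalytic weaknesses.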
Brachytherapy is primarily used for the treatment of certain kinds of cancerous tumors. The use of radionuclides to treat tumors has been studied for a very long time, but the introduction of mathematical (radiobiological) models has made treatment planning much easier. Mathematical models help compute the survival probabilities of irradiated tissues and cancer cells. With the expanding use of high-dose-rate (HDR) and low-dose-rate (LDR) brachytherapy for the treatment of cancer, a fractionated dose treatment plan is required to irradiate the tumor. In this paper, the authors discuss the dose calculation algorithms used in brachytherapy treatment planning. Precise and less time-consuming calculations
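The radiobiological model most widely used for such survival computations is the linear-quadratic (LQ) model. The sketch below is a generic illustration, not the paper's own code; the α and β values shown are hypothetical examples of typical tissue parameters.

```python
import math

def surviving_fraction(n, d, alpha, beta):
    """LQ model: S = exp(-n*(alpha*d + beta*d**2)) for n fractions of dose d (Gy)."""
    return math.exp(-n * (alpha * d + beta * d * d))

def bed(n, d, alpha_beta):
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

# Example: 10 fractions of 2 Gy with hypothetical alpha = 0.3 /Gy, beta = 0.03 /Gy^2
s = surviving_fraction(10, 2.0, 0.3, 0.03)
b = bed(10, 2.0, 10.0)  # alpha/beta = 10 Gy, a value often quoted for tumors
```

The BED formula is what makes fractionated HDR and LDR schedules comparable: two plans with the same BED are predicted to have a similar biological effect even if their fraction sizes differ.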
In this paper, a new hybrid algorithm for a linear programming model of aggregate production planning problems is proposed. The new hybrid combines the simulated annealing (SA) and particle swarm optimization (PSO) algorithms: PSO is employed to strike a good balance between exploration and exploitation in SA, so that the method is effective and efficient (in both speed and solution quality) for solving the linear programming model. Results show that the proposed approach finds solutions within a reasonable computational time compared with the standalone PSO and SA algorithms.
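One way such a hybrid can be wired together (the paper's exact coupling is not reproduced here) is to keep the standard PSO velocity and position updates but let an SA-style Metropolis test decide whether each particle keeps its new position. The sketch below applies this to a toy linear program, maximize 3x + 2y subject to x + y ≤ 4 and 0 ≤ x, y ≤ 4 (optimum 12 at (4, 0)), with the constraint handled by a penalty term; all problem data and parameter values are illustrative.

```python
import math
import random

rng = random.Random(42)  # seeded for reproducibility

def cost(x, y):
    """Toy LP as a minimization: -(3x + 2y) plus a penalty for x + y > 4."""
    return -(3.0 * x + 2.0 * y) + 1000.0 * max(0.0, x + y - 4.0)

def hybrid_pso_sa(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, t0=5.0):
    pos = [[rng.uniform(0, 4), rng.uniform(0, 4)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gpos, gcost = pbest[g][:], pcost[g]
    for it in range(iters):
        temp = t0 * (1.0 - it / iters) + 1e-9  # SA cooling schedule
        for i in range(n_particles):
            cand = [0.0, 0.0]
            for d in range(2):  # PSO velocity/position update, clamped to the box
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gpos[d] - pos[i][d]))
                cand[d] = min(4.0, max(0.0, pos[i][d] + vel[i][d]))
            new_c, old_c = cost(*cand), cost(*pos[i])
            # SA acceptance: take improvements, sometimes accept worse moves
            if new_c < old_c or rng.random() < math.exp((old_c - new_c) / temp):
                pos[i] = cand
            if new_c < pcost[i]:
                pbest[i], pcost[i] = cand[:], new_c
                if new_c < gcost:
                    gpos, gcost = cand[:], new_c
    return gpos, -gcost  # best point and best LP objective value

point, value = hybrid_pso_sa()  # value should approach the optimum of 12
```

The acceptance test is what distinguishes this from plain PSO: early on, when the temperature is high, particles may keep worsening moves and escape the penalty boundary, while late in the run the test degenerates to greedy acceptance.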
In this paper, the effective computational method (ECM) based on standard monomial polynomials is implemented to solve the nonlinear Jeffery-Hamel flow problem. Moreover, novel effective computational methods are developed and suggested in this study using suitable base functions, namely the Chebyshev, Bernstein, Legendre, and Hermite polynomials. The base functions convert the nonlinear problem into a nonlinear algebraic system of equations, which is then solved using the Mathematica® 12 program. The developed effective computational methods (D-ECM) are applied to solve the nonlinear Jeffery-Hamel flow problem, and a comparison between the methods is shown. Furthermore, the maximum
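The core mechanism, expanding the unknown in a polynomial basis and collocating the residual to obtain an algebraic system, can be shown on a toy problem. The sketch below is not the Jeffery-Hamel formulation: it uses the monomial basis on the linear test equation u' + u = 0, u(0) = 1 (exact solution e^(-x)), so the resulting algebraic system is linear and can be solved with a hand-rolled elimination; the degree and collocation points are arbitrary choices.

```python
import math

def solve_linear_system(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

# Approximate u(x) = sum_k c_k x^k (monomial basis, degree N) on [0, 1]
N = 6
rows, rhs = [[1.0] + [0.0] * N], [1.0]     # boundary condition u(0) = c_0 = 1
for i in range(1, N + 1):                  # collocation points x_i = i/N in (0, 1]
    x = i / N
    # residual of u' + u = 0: sum_k c_k * (k*x^(k-1) + x^k) = 0
    rows.append([k * x ** (k - 1) + x ** k for k in range(N + 1)])
    rhs.append(0.0)
c = solve_linear_system(rows, rhs)
u1 = sum(c[k] for k in range(N + 1))       # approximation at x = 1, near exp(-1)
```

For the actual Jeffery-Hamel equations the residual is nonlinear in the coefficients, so the collocation step produces a nonlinear algebraic system that must be handed to a nonlinear solver, which is the role Mathematica plays in the paper.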
Based on a finite element analysis using Matlab coding, an eigenvalue problem has been formulated and solved for the buckling analysis of non-prismatic columns. Different numbers of elements per column length were used to assess the rate of convergence of the model. The proposed model was then used to determine the critical buckling load factor for the idealized supported columns by comparing their buckling loads with those of the corresponding hinge-supported columns. Finally, this study finds that the critical buckling factor under an end force (P) increases by about 3.71% for each 10% increment of the taper ratio for the different end-supported columns, and the relationship between normalized critical load and slenderness ratio was g
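The paper's finite element formulation is not reproduced here, but the underlying idea of extracting a critical load from a discretized eigenvalue problem can be sketched on the simplest case: the pinned-pinned prismatic Euler column EI·u'' + P·u = 0, whose smallest eigenvalue λ = P_cr/EI equals π² on a unit length. The sketch below discretizes with finite differences (a simpler stand-in for the paper's finite elements) and recovers λ by inverse power iteration with a tridiagonal (Thomas) solver.

```python
import math

def thomas_solve(n, diag, off, b):
    """Solve a tridiagonal system with constant diagonals by the Thomas algorithm."""
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = off / diag, b[0] / diag
    for i in range(1, n):
        m = diag - off * c[i - 1]
        c[i] = off / m
        d[i] = (b[i] - off * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def smallest_eigenvalue(n=100, iters=100):
    """Smallest eigenvalue of -u'' on (0,1), u(0)=u(1)=0, n interior grid points."""
    h = 1.0 / (n + 1)
    diag, off = 2.0 / h ** 2, -1.0 / h ** 2
    v = [math.sin(math.pi * h * (i + 1)) for i in range(n)]  # any nonzero start works
    for _ in range(iters):  # inverse iteration converges to the smallest eigenpair
        v = thomas_solve(n, diag, off, v)
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    av = [diag * v[i]
          + (off * v[i - 1] if i > 0 else 0.0)
          + (off * v[i + 1] if i < n - 1 else 0.0) for i in range(n)]
    return sum(v[i] * av[i] for i in range(n))  # Rayleigh quotient

lam = smallest_eigenvalue()  # approaches pi**2 ~ 9.8696 as the mesh is refined
# For a unit-length column, P_cr = lam * E * I
```

For a non-prismatic column the stiffness varies along the length, so the matrix is no longer constant-diagonal and a generalized eigenvalue problem K·φ = P·G·φ is solved instead, which is where the finite element formulation in the paper comes in.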
Sensor sampling rate (SSR) may be an effective and crucial factor in networked control systems. Changing the sensor sampling period after the networked control system has been designed is a critical matter for the stability of the system. In this article, a wireless networked control system with multi-rate sensor sampling is proposed to control the temperature of a multi-zone greenhouse. A behavior-based Mamdani fuzzy system is used in three ways: first, to design the fuzzy temperature controller; second, to design a fuzzy gain selector; and third, to design a fuzzy error handler. The main approach of the control system design is to adjust the input gain of the fuzzy temperature controller depending on the cur
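The gain-selector idea can be sketched with a tiny Mamdani-style system: triangular membership functions on the temperature error, three hypothetical rules mapping small/medium/large error to low/medium/high gain, min-implication, max-aggregation, and centroid defuzzification over a sampled output universe. The rule base, membership shapes, and ranges below are invented for illustration and are not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_gain(error):
    """Mamdani-style gain selector (hypothetical rule base):
    small error -> low gain, medium -> medium, large -> high."""
    e = abs(error)
    # rule firing strengths from error membership (error in degrees C, illustrative)
    fire = {
        "low":    tri(e, -1.0, 0.0, 2.0),
        "medium": tri(e, 1.0, 3.0, 5.0),
        "high":   tri(e, 4.0, 8.0, 12.0),
    }
    # centroid defuzzification over sampled output gains in [0, 2]
    num = den = 0.0
    for i in range(101):
        g = 2.0 * i / 100
        out = {"low": tri(g, -0.5, 0.2, 0.9),
               "medium": tri(g, 0.5, 1.0, 1.5),
               "high": tri(g, 1.1, 1.8, 2.5)}
        mu = max(min(fire[r], out[r]) for r in fire)  # min-implication, max-aggregation
        num += mu * g
        den += mu
    return num / den if den else 1.0  # fall back to unit gain if no rule fires

g_small, g_large = fuzzy_gain(0.5), fuzzy_gain(8.0)  # larger error -> larger gain
```

In the article's architecture such a selector sits in front of the fuzzy temperature controller, scaling its input so that one controller design can tolerate the differing error dynamics that multi-rate sampling produces across zones.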
Obtaining competitive advantage is a legitimate objective that all organizations pursue, because they operate today in rapidly changing, dynamic environments and must meet changing customer demands amid intense competition among organizations. To secure a competitive position in the market, an organization must build and strengthen a competitive advantage; this advantage is not easy to achieve and comes only through identifying and using a successful competitive strategy and then managing it successfully. Hence the research problem: determining the sources of the differentiation strategy and its impact on the dimensions of compe
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a broad background of knowledge, and this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically; in general, more data yields a better DL model, although performance is also application-dependent. This issue is the main barrier for