The fouling deposition of a crude oil stream in a shell-and-tube heat exchanger was studied theoretically to investigate the effect of deposits on the heat transfer process. In the heat exchanger considered, steam flows in the inner tubes and crude oil flows in the shell at different velocities and bulk temperatures. Fouling is assumed to occur only on the heated stream side (the crude oil). The analysis was carried out for turbulent-flow heat transfer conditions over a wide range of Reynolds number, bulk temperature, and time. Several previously proposed models for fouling resistance were employed to develop a new model for the fouling rate. It was found that the fouling rate, and consequently the heat transfer coefficient, are affected by the Reynolds number, Prandtl number, film temperature, activation energy, and time.
The results showed that the fouling resistance decreases with increasing Reynolds number and Prandtl number, and increases with increasing film temperature and time. The results were compared with published experimental work and reasonable agreement was attained.
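The dependence reported above (deposition increasing with film temperature through an activation energy, decreasing with Reynolds and Prandtl number) matches the general form of threshold-fouling models of the Ebert–Panchal type. A representative form, given here as an illustration rather than the paper's fitted model, is:

```latex
% Representative threshold-fouling form (Ebert–Panchal type); the exponents
% are illustrative assumptions, not values fitted in this work.
\frac{dR_f}{dt} =
  \underbrace{\alpha \,\mathrm{Re}^{-0.66}\,\mathrm{Pr}^{-0.33}
  \exp\!\left(\frac{-E}{R\,T_{\mathrm{film}}}\right)}_{\text{deposition}}
  \;-\;
  \underbrace{\gamma\,\tau_w}_{\text{removal}}
```

Here R_f is the fouling resistance, E the activation energy, R the gas constant, T_film the film temperature, and τ_w the wall shear stress; the shear-driven removal term is what makes higher velocity (higher Re) lower the net fouling rate.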
Background: Deep vein thrombosis (DVT) is a very common problem with very serious complications, such as pulmonary embolism (PE), which carries a high mortality, and many other chronic and troublesome complications (such as chronic DVT, post-phlebitic syndrome, and chronic venous insufficiency). It has many risk factors that affect its course, severity, and response to treatment. Objectives: Most of these risk factors are modifiable, and a better understanding of the relationships between them can support better assessment of at-risk patients, prevention of the disease, and more effective treatment. The male-to-female ratio was nearly equal, so gender was not considered among the other risk factors. Type of the study: A cross-sectional study
In this paper, the problem of resource allocation at Al-Raji Company for soft drinks and juices was studied. The company carries out several types of tasks to produce juices and soft drinks, and these tasks require machines: the company has 6 machines to be allocated among 4 different tasks. The machines assigned to each task are subject to failure, after which they are repaired and rejoin the production process. From the company's past records, the probability of machine failure at each task was calculated, and the time required for each machine to complete each task was recorded. The aim of this paper is to determine the allocation with the minimum expected time
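One way to formalize this allocation is as a rectangular assignment problem over expected completion times, with each machine's task time inflated by its failure probability. The sketch below is a minimal illustration under that assumption; the times and failure probabilities are made-up placeholders, not the company's data, and the cost model is assumed rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical data: times[i][j] = hours for machine i to finish task j,
# p_fail[i] = probability that machine i fails during a task.
times = np.array([
    [4.0, 6.0, 3.0, 5.0],
    [5.0, 4.0, 6.0, 3.0],
    [6.0, 5.0, 4.0, 4.0],
    [3.0, 7.0, 5.0, 6.0],
    [5.0, 5.0, 5.0, 5.0],
    [4.0, 4.0, 6.0, 7.0],
])
p_fail = np.array([0.10, 0.05, 0.20, 0.15, 0.08, 0.12])

# Assumed cost model: a failed machine is repaired and retries, so the
# expected number of attempts is 1 / (1 - p) and expected time scales by it.
expected = times / (1.0 - p_fail)[:, None]

# Rectangular assignment: choose 4 of the 6 machines, one per task,
# minimizing the total expected completion time.
machines, tasks = linear_sum_assignment(expected)
for m, t in zip(machines, tasks):
    print(f"machine {m} -> task {t}: E[time] = {expected[m, t]:.2f} h")
print("total expected time:", expected[machines, tasks].sum().round(2), "h")
```

The factor 1/(1 − p) follows from treating each attempt as an independent trial with a repair-and-retry cycle of comparable length; the paper's actual cost model may differ.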
In this paper, we show that our proposed localization algorithm, Improved Accuracy Distribution localization for wireless sensor networks (IADLoc) [1], performs best when compared with other localization algorithms across several case studies. IADLoc minimizes the localization error rate with no additional cost, minimum energy consumption, and a decentralized implementation. IADLoc is both a range-free and a range-based localization algorithm that uses both types of antenna (directional and omnidirectional); it allows sensors to determine their location based on the region of intersection (ROI) when the beacon nodes send the information to the sink node and the la
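The ROI idea can be illustrated with a small sketch (a simplification under assumed circular radio ranges, not the IADLoc algorithm itself): a node that hears several beacons bounds its position by intersecting the beacons' coverage regions and takes the centroid of the intersection as its estimate.

```python
# Minimal region-of-intersection (ROI) sketch: assumes omnidirectional beacons
# with known positions and circular radio ranges; not the IADLoc protocol itself.
from dataclasses import dataclass

@dataclass
class Beacon:
    x: float
    y: float
    r: float  # radio range

def roi_estimate(heard: list[Beacon]) -> tuple[float, float]:
    """Intersect the axis-aligned bounding boxes of all heard beacons' ranges
    and return the centroid of the intersection as the position estimate."""
    x_lo = max(b.x - b.r for b in heard)
    x_hi = min(b.x + b.r for b in heard)
    y_lo = max(b.y - b.r for b in heard)
    y_hi = min(b.y + b.r for b in heard)
    if x_lo > x_hi or y_lo > y_hi:
        raise ValueError("ranges do not overlap; no ROI")
    return ((x_lo + x_hi) / 2, (y_lo + y_hi) / 2)

# Hypothetical example: three beacons heard by the unknown node.
print(roi_estimate([Beacon(0, 0, 10), Beacon(8, 0, 10), Beacon(4, 7, 10)]))
```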
In this paper, using δ-semi.open sets, we introduce the concepts of weakly δ-semi.normal and δ-semi.normal spaces. Many of their properties and results are investigated and studied. We also present the notion of δ-semi.compact spaces and compare it with δ-semi.regular spaces.
In this paper, a method for data encryption is proposed that uses two secret keys: the first is a matrix of XOR and NOT gates (the XN key), and the second is a binary matrix (the KEYB key). XN and KEYB are (m×n) matrices with m equal to n. Furthermore, this paper proposes a strategy to generate the secret keys (KEYBs) using the LFSR (Linear Feedback Shift Register) method, depending on a secret start point (a third secret key, the s-key). The proposed method is named X.K.N. It is a type of symmetric encryption; it treats the data as a set of blocks in its preprocessing and then encrypts the binary data as a stream cipher.
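As a rough illustration of the KEYB generation step (a minimal sketch: the 16-bit register width, tap positions, and 4×4 matrix size are assumptions, not the paper's parameters), an LFSR seeded with the secret start point can emit a bit stream that is reshaped into an n×n binary key matrix:

```python
# Minimal LFSR-based key-matrix sketch. The register width, taps, and matrix
# size below are illustrative assumptions, not the parameters used by X.K.N.
def lfsr_bits(seed: int, taps=(15, 13, 12, 10), width=16):
    """Fibonacci LFSR: yields one pseudo-random bit per step."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "seed (s-key) must be non-zero"
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1  # feedback = XOR of tap bits
        yield state & 1
        state = (state >> 1) | (bit << (width - 1))

def make_keyb(s_key: int, n: int = 4) -> list[list[int]]:
    """Reshape n*n bits from the LFSR into an (n x n) binary key matrix."""
    gen = lfsr_bits(s_key)
    return [[next(gen) for _ in range(n)] for _ in range(n)]

keyb = make_keyb(s_key=0xACE1)  # hypothetical secret start point
for row in keyb:
    print(row)
```

Seeding from the secret s-key means both parties can regenerate the same KEYB locally without ever transmitting the matrix itself.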
Abstract
There have been a number of positive developments in inclusive education in many countries, reflecting the recognition that all students, including those with disabilities, have a right to education. Around the world, educators, professionals, and parents are concerned with including children with disabilities in mainstream schools alongside their peers. Several factors contribute to this trend, including the increasing importance of education in achieving social justice for pupils with special education needs; the right of individuals with disabilities to attend mainstream schools together with their typically developing peers; and the benefit of equal opportunities for everyone in achie
Malware represents one of the most dangerous threats to computer security, and dynamic analysis has difficulty detecting unknown malware. This paper develops an integrated multi-layer detection approach to provide greater accuracy in detecting malware. A user interface integrated with VirusTotal was designed as the first layer, serving as a warning system for malware infection; a malware database of malware samples forms the second layer; Cuckoo the third layer; BullGuard the fourth layer; and IDA Pro the fifth layer. The results showed that using all five layers was better than using a single detector alone. For example, the efficiency of the proposed approach is 100% compared with 18% and 63% for VirusTotal and Bel
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each of them affects the others. The data were acquired from a private Iraqi biochemical laboratory. However, the data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Naïve Bayes (NB
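A minimal sketch of such a pipeline is shown below. It is illustrative only: the synthetic data stands in for the non-public laboratory dataset. It imputes the nulls, scales the features, and compares the named classifiers with cross-validation:

```python
# Illustrative pipeline for high-null-rate tabular data; the synthetic data
# below is a placeholder for the (non-public) laboratory dataset.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier          # CART
from sklearn.linear_model import LogisticRegression      # LR
from sklearn.neighbors import KNeighborsClassifier       # K-NN
from sklearn.naive_bayes import GaussianNB               # NB

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                 # 12 biochemical tests
X[rng.random(X.shape) < 0.25] = np.nan         # ~25% null values
y = (np.nan_to_num(X[:, 0]) + rng.normal(size=300) > 0).astype(int)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}
for name, model in models.items():
    pipe = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```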
In the United States, the pharmaceutical industry is actively devising strategies to improve the diversity of clinical trial participants. These efforts stem from abundant evidence that different ethnic groups respond differently to a given treatment; increasing the diversity of trial participants would therefore not only provide more robust and representative trial data but also lead to safer and more effective therapies. Diversifying trial participants may appear straightforward, but it is a complex process requiring feedback from multiple stakeholders such as pharmaceutical sponsors, regulators, community leaders, and research sites. The objective of this paper is therefore to describe three viable strategies that can p
Plagiarism detection systems play an important role in revealing instances of plagiarism, especially in the educational sector with scientific documents and papers. Plagiarism occurs when content is copied without the author's permission or citation. Detecting such activity requires extensive information about the forms and classes of plagiarism, and thanks to developed tools and methods it is possible to reveal many types of plagiarism. The development of Information and Communication Technologies (ICT) and the availability of online scientific documents have made these documents easy to access. With the availability of many software text editors, plagiarism detection becomes a critical