Background: Obesity is becoming a worldwide healthcare epidemic. It is associated with reduced life expectancy, increased morbidity and mortality, and greater healthcare costs. Bariatric surgery is the only effective treatment for morbid obesity and is gaining increasing popularity. There has been a steady rise in the number and types of bariatric operations performed worldwide in recent years, but none has proved to be ideal. Animal studies and animal models are a significant element in the evolution of medical knowledge, and the use of animals as a model for bariatric surgery is important both for studying the mechanisms of these operations and for developing new techniques in the management of obesity. Objectives: To study the effect of sleeve gastrectomy, as a bariatric surgical procedure, on the weight of diet-induced obese (DIO) rats. Methods: Eighteen adult rats with diet-induced obesity (DIO) were divided into two groups: the first group (n=9) underwent sleeve gastrectomy (SG) under general anesthesia, and the second (n=9) served as the sham (control) group. Postoperative care of the animals was provided as required, and the weight of the rats was measured weekly for 6 weeks. Results: The animals were followed up for 6 weeks postoperatively. Four rats from the SG group died: two on the first postoperative day and one each on the second and sixth postoperative days. Postmortem examination showed evidence of gastric leak in two of them. Two sham-operated rats also died. The dead rats were excluded from the study when body weight was calculated. Average weight before surgery was 425 g for the SG group and 420 g for the sham group. Both groups experienced some weight loss in the first week after surgery; thereafter the SG group continued to lose weight, while the sham group maintained its normal weight until the end of the experiment. Conclusions: Sleeve gastrectomy as a bariatric procedure successfully reduced the weight of DIO rats. Developing an animal model for bariatric procedures is of great importance for testing the effects of different bariatric procedures on weight and for translating these procedures to humans.
Photonic crystal fiber interferometers are widely used for sensing applications. In this work, a solid-core photonic crystal fiber (PCF) Mach-Zehnder modal interferometer for refractive index sensing is presented. The sensor is formed by fusion splicing a short length of PCF to conventional single-mode fiber (SMF-28) on both sides. To realize the modal interferometer, the collapsing technique based on fusion splicing is used to excite higher-order modes (LP01 and LP11). A 1550 nm laser diode is used as the pump light source, and a high-sensitivity optical spectrum analyzer (OSA) is used to monitor and record the transmitted spectrum. The experimental work shows that the interference spectrum of the photonic crystal fiber interferometer
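For context (this is the standard two-mode interference relation, not a result quoted from the truncated abstract), the transmission of such an LP01/LP11 modal interferometer is commonly modeled as:

```latex
% Standard two-mode (LP01/LP11) Mach-Zehnder modal interference; symbols are illustrative.
I(\lambda) = I_{01} + I_{11} + 2\sqrt{I_{01} I_{11}}\,
             \cos\!\left(\frac{2\pi\,\Delta n_{\mathrm{eff}}\, L}{\lambda}\right),
\qquad
\Delta\lambda \approx \frac{\lambda^{2}}{\Delta n_{\mathrm{eff}}\, L}
```

Here I_{01} and I_{11} are the powers carried by the two modes, Δn_eff is their effective-index difference, L is the PCF length, and Δλ is the fringe spacing; a change in the surrounding refractive index shifts Δn_eff and therefore the spectrum recorded on the OSA.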
Today, in the digital realm, images constitute a massive share of social media content but unfortunately suffer from two issues, size and transmission, for which compression is the ideal solution. Pixel-based techniques are among the modern spatially optimized modeling techniques, built on deterministic and probabilistic bases that involve mean, index, and residual components. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, while the deterministic part is utilized losslessly. The tested results achieved higher size-reduction performance compared to traditional pixel-based techniques and standard JPEG, by about 40% and 50%,
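As an illustration only (the abstract does not give the MMSA or C321 details, so none are reproduced here), a minimal Python sketch of the generic pixel-based mean/residual decomposition that such lossy schemes build on could look like the following; the block size and quantization step are assumed for the example:

```python
import numpy as np

def encode_blocks(image, block=8, q_step=8):
    """Generic pixel-based decomposition: per-block mean (deterministic part)
    plus quantized residuals (lossy probabilistic part). Illustrative only."""
    h, w = image.shape
    means, residuals = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = image[y:y + block, x:x + block].astype(np.int16)
            m = int(round(blk.mean()))          # deterministic part, kept exactly
            r = np.round((blk - m) / q_step)    # residual, quantized (lossy)
            means.append(m)
            residuals.append(r.astype(np.int8))
    return means, residuals

def decode_blocks(means, residuals, shape, block=8, q_step=8):
    """Reconstruct the image from block means and dequantized residuals."""
    out = np.zeros(shape, dtype=np.int16)
    idx = 0
    for y in range(0, shape[0], block):
        for x in range(0, shape[1], block):
            out[y:y + block, x:x + block] = means[idx] + residuals[idx].astype(np.int16) * q_step
            idx += 1
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage on a random 8-bit image
img = (np.random.rand(16, 16) * 255).astype(np.uint8)
m, r = encode_blocks(img)
rec = decode_blocks(m, r, img.shape)
```

The block means are stored exactly, mirroring the lossless handling of the deterministic part, while the quantized residuals are where the lossy size reduction comes from.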
This study employs wavelet transforms to address the issue of boundary effects. It also utilizes probit-transform techniques, based on probit functions, to estimate the copula density function. The estimation depends on the empirical distribution function of the variables, and the density is estimated in a transformed domain. Recent research indicates that early implementations of this strategy may have been more efficient. Nevertheless, in this work we implemented two novel methodologies utilizing the probit transform and the wavelet transform, and then evaluated and compared them using three criteria: root mean square error (RMSE), Akaike information criterion (AIC), and log
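A minimal sketch of the probit-transform idea follows (not the paper's wavelet-based estimator): pseudo-observations are mapped through the inverse normal CDF, a density is estimated on the unbounded transformed scale, and the copula density is recovered with the change-of-variables Jacobian. The use of a Gaussian KDE for the transformed-domain step is an assumption made purely for illustration:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde, rankdata

def probit_copula_density(u, v, eval_u, eval_v):
    """Estimate a bivariate copula density via the probit transform.

    u, v           : pseudo-observations in (0, 1), e.g. rescaled ranks
    eval_u, eval_v : points in (0, 1) at which to evaluate the copula density
    """
    # 1. Map pseudo-observations to the real line with the probit (inverse normal CDF).
    z = np.vstack([norm.ppf(u), norm.ppf(v)])

    # 2. Estimate the joint density on the transformed (unbounded) scale.
    #    Gaussian KDE is a stand-in here; a wavelet estimator would replace this step.
    kde = gaussian_kde(z)

    # 3. Transform back: c(u, v) = g(z_u, z_v) / (phi(z_u) * phi(z_v)).
    zu, zv = norm.ppf(eval_u), norm.ppf(eval_v)
    return kde(np.vstack([zu, zv])) / (norm.pdf(zu) * norm.pdf(zv))

# Example usage with rank-based pseudo-observations from correlated data
x = np.random.standard_normal(500)
y = 0.7 * x + np.random.standard_normal(500)
n = len(x)
u, v = rankdata(x) / (n + 1), rankdata(y) / (n + 1)
grid = np.linspace(0.1, 0.9, 5)
print(probit_copula_density(u, v, grid, grid))
```

In an evaluation like the one described, the estimated density would be compared against a known copula on simulated data using criteria such as RMSE.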
Digital tampering identification, which detects image modification, is a significant area of image analysis research. Over the last five years, this area has grown to achieve exceptional precision using machine learning and deep learning-based strategies. Synthesis- and reinforcement-based learning techniques must now evolve to keep pace with the research. However, before undertaking any experimentation, a scientist must first understand the current state of the art in the domain. Diverse research paths, their associated outcomes, and their analysis lay the groundwork for successful experimentation and superior results. Before starting experiments, universal image forensics approaches must be thoroughly researched. As a result, this review of variou
Professional learning societies (PLS) are a systematic method for improving teaching and learning performance through the design and building of professional learning societies, which helps overcome a culture of isolation and the fragmentation of educational supervisors' work. Many studies show that constructing and developing strong professional learning societies, focused on improving education, curriculum, and evaluation, leads to increased cooperation and participation of educational supervisors and teachers, as well as increased application of effective educational practices in the classroom.
The roles of the educational supervisor in ensuring the best and optimal implementation and activation of professional learning soci
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithms, and the accuracy th
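To make the three phases concrete, a minimal scikit-learn sketch is shown below; it is an illustration only (the toy corpus, tokenizer pattern, and classifier choice are assumptions, not taken from the surveyed studies):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy labelled Arabic snippets; the surveyed studies use much larger datasets.
docs = [
    "مباراة كرة القدم انتهت بالتعادل",   # sports
    "ارتفعت أسعار النفط في الأسواق",     # economy
    "فوز الفريق في الدوري الممتاز",      # sports
    "انخفضت أسعار الذهب اليوم",          # economy
]
labels = ["sports", "economy", "sports", "economy"]

pipeline = Pipeline([
    # Phases 1-2: preprocessing and feature extraction (Arabic-letter tokens, TF-IDF weights)
    ("tfidf", TfidfVectorizer(lowercase=False, token_pattern=r"[\u0600-\u06FF]+")),
    # Phase 3: classification
    ("clf", LogisticRegression(max_iter=1000)),
])

pipeline.fit(docs, labels)
print(pipeline.predict(["فوز في مباراة كرة القدم"]))  # shares sports vocabulary with the corpus
```

Real Arabic pipelines typically add language-specific preprocessing (normalization, stemming, stop-word removal) before feature extraction, which is where the morphology challenges noted above arise.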
This review delves deep into the intricate relationship between urban planning and flood risk management, tracing its historical trajectory and the evolution of methodologies over time. Traditionally, urban centers prioritized defensive measures, like dikes and levees, with an emphasis on immediate solutions over long-term resilience. These practices, though effective in the short term, often overlooked broader environmental implications and the necessity for holistic planning. However, as urban areas burgeoned and climate change introduced new challenges, there has been a marked shift in approach. Modern urban planning now emphasizes integrated blue-green infrastructure, aiming to harmonize human habitation with water cycles. Resil