Physical matter at high energy levels under specific circumstances tends to behave in a harsh and complicated manner while sustaining the equilibrium or non-equilibrium thermodynamics of the system. Measuring temperature by ordinary techniques is not applicable in these cases. Consequently, mathematical models are needed in numerous critical applications to measure temperature accurately at the atomic level of matter. These mathematical models follow statistical rules with different approaches to distributing the energy quantities of the system, and these approaches have functional effects at both the microscopic and macroscopic levels of that system. Therefore, this research presents an innovative wireless temperature sensor that utilizes the proton resonance frequency of carbon-13 isotope material. In addition, this study addresses the energy distribution of the particles by selecting an updated, appropriate approach whose limitations on the number of degrees of freedom are of particular interest: (1) thermodynamic limits and (2) theoretical statistical-thermodynamics observations. Lastly, the main aim of this paper is to analyze temperature in a nanoscale system via a statistical-thermodynamics approach, along with the material characterization of the carbon-13 isotope.
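As one concrete illustration of the statistical energy-distribution rules this abstract refers to (the Boltzmann distribution is the standard textbook example, not necessarily the specific approach the study adopts), the probability that a particle occupies an energy state with energy $E_i$ at temperature $T$ is:

```latex
P(E_i) = \frac{e^{-E_i / k_B T}}{\sum_j e^{-E_j / k_B T}}
```

where $k_B$ is the Boltzmann constant. Inverting a relation of this kind is what allows a measured population distribution over energy states to yield a temperature at the atomic level.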
Tremendous efforts have been exerted to understand first language acquisition in order to facilitate second language learning. The problem lies in the difficulty of mastering the English language and in adopting a theory that helps students overcome the difficulties they face. This study aims to apply Tomasello's usage-based theory of language mastery. It assumes that adults can learn faster than children and can learn the language independently, far from formal academic education. Tomasello (2003) studied the stages of language acquisition in children and developed his theory accordingly. Several studies (Ghalebi and Sadighi, 2015; Arvidsson, 2019; Munoz, 2019; Verspoor and Hong, 2013) have used this theory when examining language acquisition. Thus,
Semantic segmentation realization and understanding is a stringent task not only for computer vision but also in the earth sciences. Semantic segmentation decomposes compound architectures into single elements; the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object. It is a method for labeling and clustering point clouds automatically. Three-dimensional natural-scene classification needs a point cloud dataset as its input data representation, and many challenges appear when working with 3D data, such as the small number, low resolution, and limited accuracy of three-dimensional datasets. Deep learning is now the po
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
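The pipeline described in this abstract can be sketched in miniature as below. The block size, polynomial degree, and final Huffman stage (omitted here) are illustrative assumptions, not the authors' exact parameters; the point is that a good polynomial fit leaves a residue of small, repetitive integers that run-length coding compresses well, while keeping the scheme lossless.

```python
# Hedged sketch: block-wise polynomial approximation + run-length coding of
# the residue. Degree and block contents are illustrative, not the paper's.
import numpy as np

def compress_block(block, degree=1):
    """Fit a polynomial to one image block; return coefficients and residue."""
    x = np.arange(len(block))
    coeffs = np.polyfit(x, block, degree)
    predicted = np.rint(np.polyval(coeffs, x)).astype(int)
    residue = block - predicted          # small integers when the fit is good
    return coeffs, residue               # storing both keeps the scheme lossless

def run_length_encode(values):
    """Classic RLE: a list of [value, count] pairs for the residue stream."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

block = np.array([10, 12, 14, 16, 18, 18, 18, 18])  # one toy image block
coeffs, residue = compress_block(block)
print(run_length_encode(residue.tolist()))
```

Decompression reverses the steps: decode the runs, re-evaluate the polynomial from the stored coefficients, and add the residue back, recovering the block exactly.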
The futuristic age requires progress beyond handwork and even sub-machine dependency, and the Brain-Computer Interface (BCI) provides the necessary processing. As the article suggests, a BCI is a pathway between the signals created by human brain activity and a computer, which can translate the transmitted signals into action. BCI-processed brain activity is typically measured using EEG. Throughout this article, we intend to provide an accessible and up-to-date review of EEG-based BCI, concentrating on its technical aspects. In particular, we present essential neuroscience background that describes how to build an EEG-based BCI, including evaluating which signal processing, software, and hardware techniques to use. Individu
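One elementary signal-processing step of the kind an EEG-based BCI relies on is band-power estimation, e.g. measuring how much of a channel's power falls in the alpha band (8-12 Hz). The sketch below is an illustration with assumed values (250 Hz sampling rate, a synthetic 10 Hz signal), not a pipeline from the article.

```python
# Hedged sketch: fraction of EEG power in a frequency band, via the FFT.
# Sampling rate, band edges, and the synthetic signal are assumptions.
import numpy as np

def band_power_fraction(signal, fs, f_lo, f_hi):
    """Return the fraction of total spectral power between f_lo and f_hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return power[mask].sum() / power.sum()

fs = 250                                  # a typical EEG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)               # 2 seconds of samples
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)  # 10 Hz tone + noise
print(band_power_fraction(eeg, fs, 8, 12))
```

A real BCI would compute such features per channel and per time window and feed them to a classifier; this fragment only shows the feature itself.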
Metaheuristics in the swarm intelligence (SI) class have proven to be efficient and have become popular methods for solving different optimization problems. Based on their usage of memory, metaheuristics can be classified into algorithms with memory and algorithms without memory (memory-less). The absence of memory in some metaheuristics leads to the loss of information gained in previous iterations; such metaheuristics tend to divert from promising areas of the solution search space, which leads to non-optimal solutions. This paper aims to review memory usage and its effect on the performance of the main SI-based metaheuristics. An investigation has been performed on SI metaheuristics, memory usage, memory-less metaheuristics, memory char
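A concrete example of the "memory" this abstract discusses is particle swarm optimization (PSO), where each particle remembers its personal best position (pbest) and the swarm remembers a global best (gbest) across iterations. The sketch below, on a 1-D sphere function with illustrative parameter values, shows exactly where that memory is stored and updated; it is an example of the concept, not an algorithm from the paper.

```python
# Hedged sketch: memory in an SI metaheuristic (PSO on f(x) = x^2).
# Swarm size, coefficients, and iteration count are illustrative assumptions.
import random

def pso_sphere(n_particles=10, n_iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(-5, 5) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                 # memory: each particle's best
    gbest = min(pbest, key=lambda x: x * x)        # memory: the swarm's best
    for _ in range(n_iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * rng.random() * (pbest[i] - pos[i])   # pull to pbest
                      + c2 * rng.random() * (gbest - pos[i]))     # pull to gbest
            pos[i] += vel[i]
            if pos[i] * pos[i] < pbest[i] * pbest[i]:
                pbest[i] = pos[i]                  # memory update
        gbest = min(pbest, key=lambda x: x * x)
    return gbest

print(pso_sphere())  # the swarm's best position drifts toward the optimum at 0
```

A memory-less variant would recompute attraction from the current positions only; dropping pbest/gbest is precisely the information loss the abstract warns about.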
As a result of recent developments in highway research, in addition to the increased use of vehicles, there has been great interest in a more modern, effective, and accurate intelligent transportation system (ITS). In the field of computer vision and digital image processing, identifying specific objects in an image plays an important role in building a comprehensive picture. Vehicle license plate recognition (VLPR) is challenging because of variations in viewpoint, multiple plate formats, and non-uniform illumination conditions at the time of acquisition
Nowadays, people's expression on the Internet is no longer limited to text, especially with the rise of the short-video boom, which has led to the emergence of large amounts of modal data such as text, pictures, audio, and video. Compared with single-mode data, multi-modal data always contain more information. Mining multi-modal information can help computers better understand human emotional characteristics. However, because multi-modal data show obvious dynamic time-series features, it is necessary to solve the dynamic correlation problem within a single mode and between different modes in the same application scene during the fusion process. To solve this problem, in this paper, a feature extraction framework of