Abstract
The digital revolution that emerged in the twentieth century had a radical impact on many aspects of life, especially the economic sphere, where it took three principal forms: Artificial Intelligence (AI), the Internet of Things, and Big Data. Artificial intelligence originated in the mid-1950s; its true birth is usually dated to the conference organized in the United States by John McCarthy and Marvin Minsky. Over the years, AI techniques have developed rapidly, to the point where some applications can learn autonomously from the situations they encounter and act independently according to the surrounding circumstances, as with autonomous drones, self-driving cars, and robots. Despite the countless benefits of AI and its applications in the medical, military, educational, and other fields, these applications can also harm people: their use may injure legally protected interests.
The emergence of AI applications therefore calls for serious thinking about the novel effects this revolutionary technology will have, given its advanced capabilities and its ability to act autonomously without any human intervention. The legislator must readapt legal rules built on a tangible, material reality in order to deal with an intangible, virtual reality in many cases, particularly as the idea of artificial intelligence moves from an abstract, private framework to a concrete, tangible, and public one, and from easily controlled software to intelligent software systems, whether this concerns the enhancement of human capabilities or the physical development of AI applications in ways that mimic human behavior and actions. There is therefore a need for a legal framework governing this intelligence and for determining the civil and criminal liability, whether intentional or negligent, that results from every breach of a protected interest.
Coagulation is one of the most important processes in drinking water treatment. Alum coagulant increases aluminum residuals, which have been linked in many studies to Alzheimer's disease, so it is important to apply it at the optimal dose. In this paper, four sets of experiments were carried out to determine the relationship between raw water characteristics (turbidity, pH, alkalinity, and temperature) and the optimum alum dose, in order to form a mathematical equation that could replace the need for jar-test experiments. The experiments were performed under different conditions and under different seasonal circumstances. The optimal dose in every set was determined and used to build a gene expression programming (GEP) model.
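The dose-to-characteristics relationship described above can be sketched as a simple empirical surrogate for the jar test. The functional form and coefficients below are purely hypothetical placeholders for illustration, not the paper's fitted GEP equation.

```python
def predict_alum_dose(turbidity_ntu, ph, alkalinity_mg_l, temperature_c):
    """Hypothetical linear surrogate for the jar test.

    Coefficients are illustrative only; a real model would be fitted
    to jar-test data (e.g. by gene expression programming or least squares).
    """
    return (5.0
            + 0.8 * turbidity_ntu          # more turbidity -> more coagulant
            - 1.2 * (ph - 7.0)             # deviation from neutral pH
            + 0.05 * alkalinity_mg_l
            - 0.1 * (temperature_c - 20.0))

dose = predict_alum_dose(turbidity_ntu=12, ph=7.4, alkalinity_mg_l=120, temperature_c=18)
```

In practice such an equation is only trustworthy inside the range of raw-water conditions it was fitted on, which is why the paper ran experiment sets across different seasons.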
Currently, one of the topical areas of application of machine learning methods is the prediction of material characteristics. The aim of this work is to develop machine learning models for determining the rheological properties of polymers from experimental stress-relaxation curves. The paper presents an overview of the main directions of metaheuristic approaches (local search, evolutionary algorithms) to solving combinatorial optimization problems. Metaheuristic algorithms for solving some important combinatorial optimization problems are described, with special emphasis on the construction of decision trees. A comparative analysis of algorithms for solving the regression problem with CatBoost Regressor has been carried out.
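Tree-based regressors of the CatBoost family build ensembles of decision trees. The minimal sketch below fits a single one-split tree (a decision stump) by exhaustive search over thresholds, on invented data, to illustrate the basic building block rather than the paper's actual models.

```python
def fit_stump(x, y):
    """Fit a one-split regression tree (stump) by minimizing squared error."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue  # split must leave data on both sides
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1:]  # (threshold, left_mean, right_mean)

# toy relaxation-style data: the response drops sharply after x > 2
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 9.0, 4.0, 3.0]
threshold, left_mean, right_mean = fit_stump(x, y)
```

Gradient-boosting libraries such as CatBoost grow many shallow trees of this kind sequentially, each fitted to the residuals of the ensemble so far.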
The study uses nonparametric methods to robustly estimate location and scatter, depending on the minimum covariance determinant (MCD) of a multivariate regression model, because of the presence of outlier values; as the sample size and the number of responses in the multivariate regression model increase, it becomes difficult to find the median location.
The Fast-MCD Nested Extension genetic algorithm was used and compared with a multilayer back-propagation neural network in terms of accuracy of results and speed in finding the median location, while the best sample is determined by relying on the smallest Mahalanobis distance.
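The Mahalanobis distance used above to rank samples can be sketched in pure Python for the two-variable case; the point, mean, and covariance below are made up for illustration.

```python
def mahalanobis_sq_2d(x, mean, cov):
    """Squared Mahalanobis distance for a 2-D point.

    cov is a symmetric 2x2 covariance matrix [[a, b], [b, c]];
    we invert it via the closed-form 2x2 inverse.
    """
    dx = x[0] - mean[0]
    dy = x[1] - mean[1]
    a, b = cov[0]
    _, c = cov[1]
    det = a * c - b * b
    # inverse of [[a, b], [b, c]] is [[c, -b], [-b, a]] / det
    return (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det

d2 = mahalanobis_sq_2d((2.0, 3.0), mean=(0.0, 0.0), cov=[[4.0, 0.0], [0.0, 9.0]])
# with a diagonal covariance this reduces to 2**2/4 + 3**2/9 = 2.0
```

Robust estimators such as MCD matter here because the mean and covariance fed into this distance are themselves distorted by the very outliers the distance is meant to flag.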
Artificial neural networks (ANNs) are powerful and effective tools in time-series applications. The first aim of this paper is to diagnose better and more efficient ANN models (back-propagation, radial basis function (RBF), and recurrent neural networks) for capturing linear and nonlinear time-series behavior. The second aim is to find accurate estimators, since convergence sometimes gets stuck in local minima, one of the problems that can bias tests of the robustness of ANNs in time-series forecasting. To determine the best or optimal ANN models, the forecast skill score (SS) is employed to measure the efficiency of their performance, together with the mean square error.
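A common definition of the forecast skill score compares a model's mean square error against that of a reference forecast, SS = 1 - MSE_model / MSE_ref; the toy series below is invented for illustration, and the paper's exact reference forecast is an assumption here.

```python
def mse(pred, obs):
    """Mean square error between forecasts and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, ref_pred, obs):
    """SS = 1 - MSE_model / MSE_ref; 1 is perfect, 0 merely matches the reference."""
    return 1.0 - mse(model_pred, obs) / mse(ref_pred, obs)

obs = [1.0, 2.0, 3.0, 4.0]
ref = [2.0, 2.0, 2.0, 2.0]      # naive climatology-style reference forecast
model = [1.1, 2.1, 2.9, 3.8]    # hypothetical ANN forecast
ss = skill_score(model, ref, obs)
```

A negative SS would mean the candidate model is doing worse than the naive reference, which is exactly the failure mode a training run stuck in a poor local minimum can produce.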
In this paper, previous studies on fuzzy regression are presented. Fuzzy regression is a generalization of the traditional regression model that formulates the relationship between independent and dependent variables in a fuzzy environment. It can be introduced by a nonparametric model as well as a semi-parametric model. Moreover, the results obtained from the previous studies and their conclusions are put forward in this context. We therefore suggest a novel method of estimation via new weights instead of the old weights, and introduce another suggestion based on artificial neural networks.
Paper Type: Review article.
With its rapid spread, the coronavirus infection shocked the world and had a huge effect on billions of people's lives. The problem is to find a safe method to diagnose infections with fewer casualties. X-ray images have been shown to be an important method for the identification, quantification, and monitoring of diseases. Deep learning algorithms can be utilized to help analyze potentially huge numbers of X-ray examinations. This research conducted a retrospective multi-test analysis system to detect suspicious COVID-19 cases and used chest X-ray features to assess the progress of the illness in each patient, resulting in a "corona score"; the results were satisfactory compared with the benchmarked techniques.
In information security, fingerprint verification is one of the most common recent approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and identifying the similarity/matching between them. Several research studies have utilized different techniques for the matching process, such as fuzzy vault and image filtering approaches. Yet these approaches still suffer from imprecise articulation of the biometrics' interesting patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance.
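Template comparison of the kind described above is often reduced to a similarity score between feature vectors (for example, CNN embeddings). The sketch below uses cosine similarity with made-up vectors and an illustrative threshold, not the paper's actual matcher.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical fingerprint embeddings; a threshold turns the score
# into a match/no-match decision.
enrolled = [0.2, 0.8, 0.1, 0.5]
probe = [0.19, 0.82, 0.12, 0.48]
score = cosine_similarity(enrolled, probe)
is_match = score > 0.95   # illustrative threshold, tuned in practice on a validation set
```

The threshold trades false accepts against false rejects, which is why biometric systems report both rates rather than a single accuracy number.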
Steganography is a technique of concealing secret data within other quotidian files of the same or a different type. Hiding data has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the images of a video file. In this work, a video steganography model is proposed by training a model to hide a video (or images) within another video using convolutional neural networks (CNNs). Using a CNN in this approach achieves a main goal of any steganographic method, increasing security (making the hidden data hard to observe or break with a steganalysis program); this was achieved in this work because the weights and architecture are randomized.
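As a point of comparison for the CNN-based approach, classical image steganography often hides one bit per pixel in the least significant bit (LSB) of the pixel value. The sketch below is a generic LSB baseline on toy grayscale values, not the paper's method.

```python
def embed_lsb(pixels, message_bits):
    """Hide one message bit per pixel in the least significant bit."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear LSB, then set it to the bit
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]   # toy grayscale values
bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_lsb(pixels, bits)
recovered = extract_lsb(stego, len(bits))
```

Each pixel changes by at most 1, which is visually imperceptible but statistically detectable; CNN-based schemes aim to spread the payload in ways that resist such steganalysis.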
Disease diagnosis with computer-aided methods has been extensively studied and applied in diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast diseases based on clinical data helps doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit the Convolutional Neural Network (CNN) in discriminating breast MRI scans into pathological and healthy. This study presents a fully automated and efficient deep-feature extraction algorithm that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to feature extraction.
Harold Pinter often portrays the dilemma of obliterated figures who are incapable of feeling their own existence. These figures feel exhausted and frustrated in a world that deprives them of their humanity. They retreat into a limited world where they look for security and protection. The characters' feeling of security is threatened by outside forces represented by intruding persons who stand for mysterious, indefinable powers. The conflict between these intruders and the characters finally ends with the characters' defeat. The reason for the intruders' attack on their victims remains ambiguous and is never explained. The element of mystery pervades Pinter's plays.