For more than a decade, human rights dialogue in the European Mediterranean Region has been marked by a number of tensions. Although several factors contribute to these disputes, the effect on the dialogue of human rights conditionality, which ties the progression of EU economic cooperation to partner countries' human rights advancement, has not been studied. Understanding the aspects, impacts, and effects of conditionality on Euro-Med relations is crucial for furthering dialogue. Yet this variable has been almost entirely neglected in academic and policy research. Using a mixed-methodology approach, the research identifies several direct and indirect impacts of conditionality on human rights dialogue. Direct effects are reflected in the widespread rejection of the language of conditionality used by EU institutions, exposing the EU's normative identity to intense scrutiny from its southern neighbors. Indirect effects include skepticism and perceptions that the EU politicizes human rights for its own benefit.
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, in which some classes have far more instances than others. Imbalanced data result in poor performance and a bias toward the majority class at the expense of the others. In this paper, we propose three techniques based on the Over-Sampling (O.S.) technique for processing an imbalanced dataset, redistributing it, and converting it into a balanced dataset. These techniques are (Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
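The core idea behind SMOTE-style over-sampling mentioned above can be sketched as follows. This is a minimal illustration of the standard SMOTE interpolation step, not the paper's improved variants; the function name and parameters are chosen here for clarity.

```python
import numpy as np

def smote_sample(minority, k=5, n_new=100, rng=None):
    """Minimal SMOTE sketch: synthesize new minority-class points by
    interpolating between a random minority sample and one of its
    k nearest neighbours within the same class."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # distances to all minority points; skip index 0 (x itself)
        d = np.linalg.norm(minority - x, axis=1)
        nn = np.argsort(d)[1:k + 1]
        neighbour = minority[rng.choice(nn)]
        # new point lies on the segment between x and its neighbour
        synthetic.append(x + rng.random() * (neighbour - x))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two existing minority samples, the generated data stay inside the minority class's convex hull, which is what distinguishes SMOTE from simple duplication of samples.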
Text categorization refers to the process of grouping text or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithms, and the accuracy th
The presence of different noise sources and the continuous increase in crosstalk in deep submicrometer technology have raised concerns about on-chip communication reliability, leading to the incorporation of crosstalk avoidance techniques in error control coding schemes. This brief proposes a joint crosstalk avoidance and adaptive error control scheme to reduce power consumption by providing appropriate communication resiliency based on the runtime noise level. By switching between shielding and duplication as the crosstalk avoidance technique, and between hybrid automatic repeat request and forward error correction as the error control policies, three modes of error resiliency are provided. The results show that, in reduced mode, the scheme achie
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within various carrier objects such as text, can be employed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in its linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harn
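To make the general idea of text steganography concrete, the sketch below hides bits in a cover string using zero-width Unicode characters. This is a generic illustration, not the Arabic-specific scheme the abstract surveys; the function names and the choice of zero-width characters are assumptions made for this example.

```python
# Zero-width space and zero-width non-joiner encode the bits 0 and 1;
# both are invisible when the stego text is rendered.
ZW0, ZW1 = "\u200b", "\u200c"

def hide(cover: str, secret_bits: str) -> str:
    """Insert one zero-width character per secret bit after each
    cover character until all bits are embedded."""
    out, bits = [], iter(secret_bits)
    for ch in cover:
        out.append(ch)
        b = next(bits, None)
        if b is not None:
            out.append(ZW1 if b == "1" else ZW0)
    return "".join(out)

def reveal(stego: str) -> str:
    """Recover the embedded bit string from the zero-width characters."""
    return "".join("1" if c == ZW1 else "0"
                   for c in stego if c in (ZW0, ZW1))
```

Stripping the zero-width characters recovers the original cover text unchanged, which illustrates why text-based carriers impose such tight capacity constraints: the payload must ride on features the reader cannot see.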
The research tackles the potential challenges faced by the translator when dealing with the literal translation of contemporary political terms in the media. Despite the universal complexity of translating political jargon, adopting literal translation introduces an added layer of intricacy. The primary aim of literal translation is to maintain faithfulness to the original text, irrespective of whether it is in English or Arabic. However, this method presents several challenges within the linguistic and cultural dimensions. Drawing upon scholarly sources, this article expounds upon the multifaceted issues that emerge from the verbatim translation of political terms from English into Arabic. These problems include political culture, language differenc
Recent population studies have shown that placenta accreta spectrum (PAS) disorders remain undiagnosed before delivery in half to two-thirds of cases. In a series from specialist diagnostic units in the USA, around one-third of cases of PAS disorders were not diagnosed during pregnancy. Maternal
This review delves deep into the intricate relationship between urban planning and flood risk management, tracing its historical trajectory and the evolution of methodologies over time. Traditionally, urban centers prioritized defensive measures, like dikes and levees, with an emphasis on immediate solutions over long-term resilience. These practices, though effective in the short term, often overlooked broader environmental implications and the necessity for holistic planning. However, as urban areas burgeoned and climate change introduced new challenges, there has been a marked shift in approach. Modern urban planning now emphasizes integrated blue-green infrastructure, aiming to harmonize human habitation with water cycles. Resil
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method surpasses the accuracy of the other methods. The proposed method's best accuracy is 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
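The evaluation protocol described above (80/20 split, 5-fold cross-validation, SVM classifier) can be sketched with scikit-learn. The synthetic two-class features below stand in for the orthogonal-polynomial moments; the OP transform and sparse filter themselves are not reproduced here, so this is only an illustration of the protocol, not of the paper's pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

# Synthetic two-class "moment" features: 100 samples per class, 10 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(1, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)

# 80% training / 20% testing split, stratified to preserve class balance.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = SVC(kernel="rbf")
# 5-fold cross-validation on the training portion.
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=5)
clf.fit(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)
```

Cross-validating only on the training portion and reserving the 20% split for the final score keeps the test set untouched during model selection, which matches the split-then-validate order the abstract describes.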