Melanoma, a highly malignant form of skin cancer, affects individuals of all genders and is associated with high mortality rates, especially in advanced stages. Tele-dermatology has emerged as a proficient diagnostic approach for skin lesions and is particularly beneficial in rural areas with limited access to dermatologists. However, accurately and efficiently segmenting melanoma remains challenging because of the significant diversity in the morphology, pigmentation, and dimensions of cutaneous nevi. To address this challenge, we propose a novel approach called DenseUNet-169 with a dilated-convolution encoder-decoder for automatic segmentation of RGB dermoscopic images. By incorporating dilated convolution, our model enlarges the receptive field of the kernels without increasing the number of parameters. Additionally, we use a Copy and Concatenation Attention Block (CCAB) for robust feature computation. To evaluate the performance of the proposed framework, we used the International Skin Imaging Collaboration (ISIC) 2017 dataset. The experimental results demonstrate the reliability and effectiveness of the suggested approach compared with existing methodologies. Our framework achieved high accuracy (98.38%), precision (96.07%), recall (94.32%), Dice score (95.07%), and Jaccard score (90.45%), outperforming current techniques.
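The receptive-field claim above can be illustrated with a minimal sketch (not the paper's model): a 1-D dilated convolution whose kernel keeps the same number of weights while its effective extent grows with the dilation factor.

```python
# Illustrative sketch of dilated convolution (1-D, "valid" mode):
# effective receptive field = (len(kernel) - 1) * dilation + 1,
# while the parameter count stays len(kernel).

def dilated_conv1d(signal, kernel, dilation=1):
    span = (len(kernel) - 1) * dilation + 1   # effective kernel extent
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0.0
        for j, w in enumerate(kernel):
            # samples are taken `dilation` steps apart
            acc += w * signal[start + j * dilation]
        out.append(acc)
    return out

sig = [1, 2, 3, 4, 5, 6, 7, 8]
k = [1, 0, -1]                                 # 3 parameters either way
plain = dilated_conv1d(sig, k, dilation=1)     # receptive field 3
dilated = dilated_conv1d(sig, k, dilation=2)   # receptive field 5, same 3 weights
```

In 2-D the same idea lets an encoder-decoder see larger context per layer without extra parameters, which is the property the abstract relies on.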
Software testing is a vital part of the software development life cycle. In many cases, the system under test has more than one input, making exhaustive testing of every combination infeasible (the execution time of such a test suite can be outrageously long). Combinatorial testing offers an alternative to exhaustive testing by considering the interaction of input values for every t-way combination of parameters. Combinatorial testing can be divided into three types: uniform strength interaction, variable strength interaction, and input-output based relation (IOR). IOR combinatorial testing tests only the important combinations selected by the tester. Most research in combinatorial testing appli
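As a minimal sketch of the t-way idea (with t = 2, i.e. pairwise testing; the parameter names and values are invented for illustration), the interactions a combinatorial suite must cover can be enumerated like this:

```python
# Illustrative sketch: enumerating every 2-way (pairwise) interaction
# between input parameters -- the coverage target of a t=2 suite.
from itertools import combinations, product

def pairwise_interactions(parameters):
    """All value pairs for every 2-way combination of parameters."""
    pairs = []
    for (name_a, vals_a), (name_b, vals_b) in combinations(parameters.items(), 2):
        for va, vb in product(vals_a, vals_b):
            pairs.append(((name_a, va), (name_b, vb)))
    return pairs

# Hypothetical system under test with three parameters:
params = {"os": ["linux", "windows"],
          "browser": ["ff", "chrome"],
          "db": ["pg", "mysql"]}
interactions = pairwise_interactions(params)
# 3 parameter pairs x (2 x 2) value pairs = 12 interactions to cover;
# a covering array packs several of them into each test case, which is
# where the savings over exhaustive testing come from.
```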
With the rapid development of computer and network technologies, the security of information on the internet has become compromised, and many threats may affect the integrity of such information. Much research has focused on providing solutions to these threats. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not malicious activity is taking place on a network. In this paper, a hierarchical classification for an anomaly-based intrusion detection system is proposed. Two levels of feature selection and classification are used. In the first level, the global feature vector for detecting the basic attacks (DoS, U2R, R2L, and Probe) is selected. In the second level, four local feature vect
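The two-level structure can be sketched as follows. This is only a structural illustration with made-up thresholds and feature names, not the paper's classifiers: a level-1 model picks the coarse attack family from global features, then a family-specific level-2 model refines the decision using that family's local features.

```python
# Structural sketch of hierarchical (two-level) intrusion classification.
# The rules and feature names below are hypothetical placeholders.

def level1_classify(global_features):
    """Stand-in for the global-feature classifier (level 1)."""
    if global_features["syn_rate"] > 100:
        return "DoS"
    if global_features["failed_logins"] > 5:
        return "R2L"
    return "normal"

# One refinement model per attack family (level 2), each using its own
# local feature vector.
LEVEL2 = {
    "DoS": lambda local: "smurf" if local["icmp_ratio"] > 0.5 else "neptune",
    "R2L": lambda local: "guess_passwd",
}

def classify(global_features, local_features):
    family = level1_classify(global_features)
    refine = LEVEL2.get(family)
    return (family, refine(local_features)) if refine else (family, None)
```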
Drilling deviated wells is a frequently used approach in the oil and gas industry to increase the productivity of wells in thin reservoirs. Drilling these wells is challenging due to the low rate of penetration (ROP) and severe wellbore-instability issues. The objective of this research is to achieve better drilling performance by reducing drilling time and increasing wellbore stability.
In this work, the first step was to develop a model that predicts the ROP for deviated wells by applying Artificial Neural Networks (ANNs). In the modeling, azimuth (AZI) and inclination (INC) of the wellbore trajectory, controllable drilling parameters, unconfined compressive strength (UCS), formation
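The shape of such an ANN can be sketched minimally. The weights below are arbitrary placeholders and the choice of input vector (trajectory angles plus drilling parameters and UCS) is an assumption based on the variables named above, not the paper's trained network.

```python
# Minimal sketch of a feed-forward ANN of the kind used for ROP
# prediction: one hidden tanh layer, linear output. All weights here
# are illustrative placeholders, not fitted values.
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Predict a scalar (e.g. ROP) from an input feature vector x."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Hypothetical normalized inputs: [AZI, INC, WOB, RPM, UCS]
x = [0.3, 0.6, 0.4, 0.5, 0.7]
w_hidden = [[0.1, -0.2, 0.3, 0.1, -0.1],
            [0.2, 0.1, -0.3, 0.2, 0.1]]
rop_hat = forward(x, w_hidden, [0.0, 0.0], [0.5, -0.4], 0.2)
```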
Two simple methods for the determination of eugenol were developed. The first depends on the oxidative coupling of eugenol with p-amino-N,N-dimethylaniline (PADA) in the presence of K3[Fe(CN)6]. A linear regression calibration plot for eugenol was constructed at 600 nm over a concentration range of 0.25-2.50 μg·mL⁻¹, with a correlation coefficient (r) of 0.9988. The limits of detection (LOD) and quantitation (LOQ) were 0.086 and 0.284 μg·mL⁻¹, respectively. The second method is based on dispersive liquid-liquid microextraction of the derivatized oxidative-coupling product of eugenol with PADA. Under the optimized extraction procedure, the extracted colored product was determined spectrophotometrically at 618 nm. A l
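For readers unfamiliar with how LOD and LOQ fall out of a calibration line, the standard ICH-style computation is sketched below; the sigma and slope values are illustrative stand-ins, not the paper's data.

```python
# Sketch of the conventional LOD/LOQ calculation from a calibration line:
# LOD = 3.3 * sigma / S and LOQ = 10 * sigma / S, where sigma is the
# residual standard deviation of the regression and S its slope.

def lod_loq(sigma, slope):
    """Return (LOD, LOQ) in the concentration units of the calibration."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical regression statistics for a spectrophotometric calibration:
lod, loq = lod_loq(sigma=0.005, slope=0.19)
```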
The most influential theory of ‘Politeness’ was formulated in 1978 and revised in 1987 by Brown and Levinson. ‘Politeness’, which represents the interlocutors’ desire to be pleasant to each other through a positive manner of addressing, was claimed to be a universal phenomenon. The gist of the theory is the intention to mitigate ‘Face’ threats carried by certain face-threatening acts towards others.
‘Politeness Theory’ is based on the concept that interlocutors have ‘Face’ (i.e., a self- and public image) which they consciously project and try to protect and preserve. The theory holds that various politeness strategies are used to prot
This article studies comprehensive edge detection methods and algorithms in digital images, edge detection being a basic process in the field of image processing and analysis. The purpose of edge detection is to discover the borders that separate distinct areas of an image, which contributes to refining the understanding of the image contents and to extracting structural information. The article starts by clarifying the idea of an edge and its importance in image analysis, then reviews the most prominent edge detection methods used in this field (e.g., the Sobel, Prewitt, and Canny filters), besides other schemes based on detecting abrupt changes in light intensity and color gradation. The research also discuss
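As a concrete illustration of the filter family mentioned above, the following minimal pure-Python sketch applies the Sobel operator to a tiny grayscale image and shows the gradient magnitude peaking at an intensity edge (a toy example, not the article's implementation):

```python
# Illustrative Sobel edge detection on a tiny grayscale image.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient

def convolve_at(img, kernel, r, c):
    """3x3 correlation of `kernel` with `img` centered at (r, c)."""
    return sum(kernel[i][j] * img[r + i - 1][c + j - 1]
               for i in range(3) for j in range(3))

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]          # borders left at 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve_at(img, SOBEL_X, r, c)
            gy = convolve_at(img, SOBEL_Y, r, c)
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 10, 10]] * 4
mag = sobel_magnitude(img)   # large magnitudes along the step, 0 elsewhere
```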
The demand for electronic-passport (frontal facial) photo images has grown rapidly. It now extends to Electronic Government (E-Gov) applications such as social benefits, driver's licenses, e-passports, and e-visas. With COVID-19 (coronavirus disease), facial (formal) images have become more widely used and are spreading quickly, being used to verify an individual's identity. Unfortunately, such images carry insignificant detail against a constant background, which leads to huge byte consumption affecting storage space and transmission; the optimal solution aims to curtail data size using compression techniques that exploit image redundancy efficiently.
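One of the simplest ways to exploit the redundancy of a large constant background is run-length encoding, sketched below on a single hypothetical pixel row; this illustrates the general principle, not the specific compression scheme the work proposes.

```python
# Illustrative sketch: run-length encoding (RLE), which collapses the long
# constant-background runs typical of formal ID photos into (value, count)
# pairs -- a lossless exploitation of spatial redundancy.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

# A mostly-constant background row with a few foreground pixels:
row = [255] * 60 + [40, 41, 42] + [255] * 37
encoded = rle_encode(row)
assert rle_decode(encoded) == row   # lossless round trip
# 100 pixels collapse to 5 runs; flat regions compress dramatically,
# which is exactly the redundancy the abstract points at.
```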