Semantic segmentation and scene understanding are demanding tasks not only in computer vision but also in the earth sciences. Semantic segmentation decomposes compound structures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object, making it a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises several challenges, such as the limited number, resolution, and accuracy of available 3D datasets. Deep learning is currently the most powerful and popular tool for data and image processing in computer vision, and it is used in many applications such as image recognition, object detection, and semantic segmentation. This research paper surveys the background of many techniques designed for 3D point cloud semantic segmentation in different domains, evaluated on several freely available datasets, and also compares these methods.
In data mining and machine learning, it is traditionally assumed that the training data, the test data, and any data to be processed in the future share the same feature-space distribution. This condition rarely holds in the real world. Domain adaptation-based methods are used to overcome this challenge. One of the remaining challenges in domain adaptation-based methods is selecting the most effective features so that they remain effective on the target dataset. In this paper, a new feature selection method based on deep reinforcement learning is proposed. In the proposed method, in order to select the best and most appropriate features, the essential policies
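Since the abstract is cut off before the method details, the following is only a generic illustration of reinforcement-learning-style feature selection, in which an epsilon-greedy agent adds one feature at a time and receives a classifier's validation accuracy as its reward; every name and the reward definition are assumptions, not the proposed method:

```python
# Generic RL-style feature selection sketch (not the paper's exact method):
# an epsilon-greedy agent adds one feature per step; the reward is the
# validation accuracy of a classifier trained on the selected subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def reward(feature_idx):
    """Validation accuracy of a classifier using only the selected features."""
    idx = sorted(feature_idx)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, idx], y_tr)
    return clf.score(X_val[:, idx], y_val)

rng = np.random.default_rng(0)
q_values = np.zeros(X.shape[1])          # one action value per candidate feature
selected, epsilon, n_select = set(), 0.2, 8

for _ in range(n_select):
    remaining = [f for f in range(X.shape[1]) if f not in selected]
    if rng.random() < epsilon:                        # explore
        action = rng.choice(remaining)
    else:                                             # exploit
        action = max(remaining, key=lambda f: q_values[f])
    r = reward(selected | {action})
    q_values[action] += 0.5 * (r - q_values[action])  # simple value update
    selected.add(action)

print("Selected features:", sorted(selected), "accuracy:", reward(selected))
```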
Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing applications, especially with the large increase in the volume of textual data that is produced daily. Traditional approaches for calculating the degree of similarity between two texts, based on the words they share, do not perform well with short texts because two similar texts may be written in different terms by employing synonyms. As a result, short texts should be semantically compared. In this paper, a semantic similarity measurement method between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that repre
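A minimal sketch of the general idea of combining knowledge-based and corpus-based signals (not the paper's specific semantic-network construction), assuming NLTK's WordNet and scikit-learn are available:

```python
# Hedged sketch: combine a knowledge-based (WordNet) score with a
# corpus-based (TF-IDF cosine) score for two short texts. This illustrates
# the idea of mixing both information sources, not the paper's exact method.
from itertools import product
from nltk.corpus import wordnet as wn            # requires: nltk.download('wordnet')
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def knowledge_score(text_a, text_b):
    """Average of the best WordNet path similarity over all word pairs."""
    words_a, words_b = text_a.lower().split(), text_b.lower().split()
    scores = []
    for wa, wb in product(words_a, words_b):
        syns_a, syns_b = wn.synsets(wa), wn.synsets(wb)
        best = max((sa.path_similarity(sb) or 0.0
                    for sa, sb in product(syns_a, syns_b)), default=0.0)
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0

def corpus_score(text_a, text_b):
    """Cosine similarity between TF-IDF vectors of the two texts."""
    tfidf = TfidfVectorizer().fit_transform([text_a, text_b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

def semantic_similarity(text_a, text_b, alpha=0.5):
    """Weighted combination of knowledge-based and corpus-based scores."""
    return alpha * knowledge_score(text_a, text_b) + (1 - alpha) * corpus_score(text_a, text_b)

print(semantic_similarity("the car is fast", "the automobile is quick"))
```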
During COVID-19, wearing a mask was globally mandated in various workplaces, departments, and offices. New deep learning convolutional neural network (CNN) based classifications were proposed to increase the validation accuracy of face mask detection. This work introduces a face mask model that is able to recognize whether a person is wearing a mask or not. The proposed model has two stages to detect and recognize the face mask: in the first stage, the Haar cascade detector is used to detect the face, while in the second stage, the proposed CNN model, built from scratch, is used as a classification model. The experiment was applied to the masked faces (MAFA) dataset with RGB images of size 160x160 pixels. The model achieve
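A minimal sketch of such a two-stage pipeline, assuming OpenCV's bundled Haar cascade and a small Keras CNN built from scratch; the layer sizes and helper names here are illustrative, not the paper's exact architecture:

```python
# Stage 1: Haar cascade face detection; Stage 2: small CNN mask/no-mask
# classifier on 160x160 RGB crops. Layer choices are illustrative only.
import cv2
import numpy as np
from tensorflow.keras import layers, models

def build_mask_classifier():
    """Small CNN trained from scratch for binary mask / no-mask classification."""
    model = models.Sequential([
        layers.Input(shape=(160, 160, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
classifier = build_mask_classifier()   # would be trained on MAFA-style crops first

def detect_and_classify(image_bgr):
    """Return (x, y, w, h, mask_probability) for each detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = cv2.resize(image_bgr[y:y + h, x:x + w], (160, 160)) / 255.0
        prob = float(classifier.predict(face[np.newaxis], verbose=0)[0, 0])
        results.append((x, y, w, h, prob))
    return results
```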
The Internet of Things (IoT) technology and smart systems play a major role in the advanced developments taking place in the world nowadays, especially in multi-privilege systems. Many smart systems are used in daily human life to serve people and facilitate their tasks, such as alarm systems that work to prevent unwanted events, or face detection and recognition systems. The main idea of this work is to capture live video using a connected Pi camera, save it, and unlock the electric strike door in several ways: either automatically, by processing live video from a connected USB webcam with a deep learning facial recognition algorithm and OpenCV, or by RFID technology, as well as by detecting abnormal entrance wit
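A rough sketch of the face-recognition unlock path on a Raspberry Pi, assuming the face_recognition and RPi.GPIO libraries and an illustrative relay pin; the RFID and abnormal-entrance paths are omitted:

```python
# Hedged sketch: unlock an electric strike via a GPIO-driven relay when a
# known face is recognized in the webcam feed. Pin number and tolerance are
# illustrative assumptions, not values from the paper.
import time
import cv2
import face_recognition
import RPi.GPIO as GPIO

RELAY_PIN = 17                              # hypothetical GPIO pin for the strike relay
GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

# Encoding of an authorized person, computed once from a reference photo.
known = face_recognition.face_encodings(face_recognition.load_image_file("authorized.jpg"))[0]

cap = cv2.VideoCapture(0)                   # USB webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb):
            if face_recognition.compare_faces([known], encoding, tolerance=0.5)[0]:
                GPIO.output(RELAY_PIN, GPIO.HIGH)   # energize the strike to unlock
                time.sleep(3)
                GPIO.output(RELAY_PIN, GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```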
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic and clinical ophthalmic examinations; the latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Then, our EDTL method combines the output probabilities of each of the five classifiers to obtain a decision b
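A minimal sketch of the probability-fusion step, assuming each of the five classifiers returns class probabilities for the same case; simple averaging (soft voting) is shown as one plausible combination rule, not necessarily the exact EDTL rule:

```python
# Hedged sketch: fuse per-classifier class probabilities by averaging
# (soft voting). The actual EDTL combination rule may differ.
import numpy as np

CLASSES = ["normal", "keratoconus"]

def fuse_probabilities(prob_list):
    """prob_list: one probability vector per classifier (SqN, AN, SfN, MN, PI)."""
    probs = np.vstack(prob_list)            # shape: (n_classifiers, n_classes)
    fused = probs.mean(axis=0)              # average the output probabilities
    return CLASSES[int(fused.argmax())], fused

# Example: outputs of the four fine-tuned CNNs plus the Pentacam-indices classifier.
decision, fused = fuse_probabilities([
    [0.30, 0.70], [0.20, 0.80], [0.40, 0.60], [0.25, 0.75], [0.45, 0.55],
])
print(decision, fused)
```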
Detection and classification of animals is a major challenge facing researchers. There are five classes of vertebrate animals, namely mammals, amphibians, reptiles, birds, and fish, and each class includes many thousands of different animals. In this paper, we propose a new model based on the training of deep convolutional neural networks (CNN) to detect and classify two classes of vertebrate animals (mammals and reptiles). Deep CNNs are the state of the art in image recognition and are known for their high learning capacity, accuracy, and robustness to typical object recognition challenges. The dataset of this system contains 6000 images, including 4800 images for training. The proposed algorithm was tested by using 1200
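A minimal sketch of the held-out evaluation step, assuming the trained two-class CNN has been saved to a hypothetical animal_cnn.h5 file (with any preprocessing layers inside it) and the 1200 test images sit in per-class folders:

```python
# Hedged sketch: evaluate a trained two-class CNN on the 1200 held-out test
# images loaded from per-class folders. Paths and image size are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

test_ds = tf.keras.utils.image_dataset_from_directory(
    "animals/test",                    # subfolders: mammals/, reptiles/
    image_size=(128, 128), batch_size=32, shuffle=False)

# Previously trained CNN; assumed to contain its own rescaling/preprocessing.
model = tf.keras.models.load_model("animal_cnn.h5")

y_true = np.concatenate([labels.numpy() for _, labels in test_ds])
y_pred = np.argmax(model.predict(test_ds), axis=1)
print(classification_report(y_true, y_pred, target_names=test_ds.class_names))
```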
Retinopathy of prematurity (ROP) can cause blindness in premature neonates. It is diagnosed when new blood vessels form abnormally in the retina. However, people at high risk of ROP might benefit significantly from early detection and treatment. Therefore, early diagnosis of ROP is vital in averting visual impairment. However, due to a lack of medical experience in detecting this condition, many people refuse treatment; this is especially troublesome given the rising cases of ROP. To deal with this problem, we trained three transfer learning models (VGG-19, ResNet-50, and EfficientNetB5) and a convolutional neural network (CNN) to identify the zones of ROP in preterm newborns. The dataset to train th
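A minimal transfer-learning sketch for one of the listed backbones (ResNet-50 with ImageNet weights, a frozen base, and a new classification head); the number of zone classes, image size, and layer choices are assumptions:

```python
# Hedged sketch: ResNet-50 transfer learning with a frozen ImageNet backbone
# and a new head for ROP zone classification. Class count is an assumption.
import tensorflow as tf

NUM_ZONES = 3                                         # assumed number of ROP zones

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                # freeze pretrained features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_ZONES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown here
```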
Cryptocurrency has become an important participant in the financial market, attracting large investments and interest. In this vibrant setting, the proposed cryptocurrency price prediction tool stands as a pivotal element, providing direction to both enthusiasts and investors in a market grounded in the numerous complexities of digital currency. Employing feature selection together with a combination of ARIMA, LSTM, and Linear Regression techniques, the tool lets users analyze data with artificial intelligence and obtain real-time forecasts for the crypto universe. Users are offered a wide selection of high-quality cryptocurrencies to choose from. The
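A toy sketch of how the three forecasters could be combined, assuming next-step forecasts from ARIMA, an LSTM over lagged prices, and Linear Regression are simply averaged; the series, lag length, and weighting are illustrative, not the tool's actual pipeline:

```python
# Hedged sketch: produce next-step price forecasts with ARIMA, an LSTM, and
# Linear Regression on lagged prices, then average them. A toy illustration.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.arima.model import ARIMA

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 300)) + 100.0   # toy price series
LAGS = 10

# Lagged feature matrix shared by the LSTM and Linear Regression models.
X = np.array([prices[i:i + LAGS] for i in range(len(prices) - LAGS)])
y = prices[LAGS:]

# 1) ARIMA on the raw series.
arima_pred = ARIMA(prices, order=(5, 1, 0)).fit().forecast(1)[0]

# 2) Linear Regression on lagged prices.
lr_pred = LinearRegression().fit(X, y).predict(prices[-LAGS:].reshape(1, -1))[0]

# 3) Small LSTM on the same lag windows.
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LAGS, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X[..., np.newaxis], y, epochs=5, verbose=0)
lstm_pred = float(lstm.predict(prices[-LAGS:].reshape(1, LAGS, 1), verbose=0)[0, 0])

forecast = np.mean([arima_pred, lr_pred, lstm_pred])       # simple average of the trio
print(f"ARIMA={arima_pred:.2f}  LR={lr_pred:.2f}  LSTM={lstm_pred:.2f}  combined={forecast:.2f}")
```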
Deep learning algorithms have recently achieved a lot of success, especially in the field of computer vision. This research aims to describe the classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning was used, followed by fine-tuning, and architectures pre-trained on the well-known ImageNet image database were employed. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input data consist of five classes, including a SAR image class (houses) and non-SAR image classes (cats, dogs, horses, and humans). The Conv
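A minimal sketch of the described feature-extraction approach, assuming ImageNet-pretrained VGG16 without its top layers as a fixed feature extractor and a simple classifier (here logistic regression, chosen as an assumption) trained on the extracted features; the data path and image size are also assumptions:

```python
# Hedged sketch: use ImageNet-pretrained VGG16 (without its top layers) as a
# fixed feature extractor and train a new classifier on the extracted
# features for the five classes. Paths and image size are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

extractor = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))

def extract_features(dataset):
    """Run VGG16 on image batches and collect feature vectors and labels."""
    feats, labels = [], []
    for images, batch_labels in dataset:
        x = tf.keras.applications.vgg16.preprocess_input(images)
        feats.append(extractor(x, training=False).numpy())
        labels.append(batch_labels.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# One subfolder per class (houses, cats, dogs, horses, humans).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

X_train, y_train = extract_features(train_ds)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```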