In this research, convolutional neural networks (CNNs) were used to estimate missing values in a univariate time series dataset and were compared with the double exponential smoothing (DES/Holt) and support vector regression (SVR) estimation methods to determine their accuracy in handling missing values. Missing values directly affect the construction of mathematical and statistical models, reducing the accuracy of the final results on which future decisions are based. Missing values in a dataset typically arise from problems during sampling, such as a malfunction of the measuring device, security attacks, or communication errors. The methods were evaluated using a simulation approach that generated data randomly: three sample sizes (60, 100, 300) were selected, with missing-data percentages of 10%, 15%, and 20% for each, satisfying the missing at random (MAR) condition. The Box-Jenkins MA(1) model was applied once with β = 0.5 and again with β = 0.9. The accuracy of the estimation methods was evaluated using the mean squared error (MSE) and root mean squared error (RMSE) criteria. The results indicated that the CNN model outperformed the other methods.
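The simulation setup described above can be sketched as follows (a minimal illustration, not the authors' code: the function names are hypothetical, the deletion scheme is uniform random removal as a simple stand-in for the MAR mechanism, and a mean-fill imputation serves as a placeholder baseline in place of the CNN, DES/Holt, and SVR estimators):

```python
import numpy as np

def simulate_ma1(n, beta, seed=0):
    """Generate an MA(1) series: y_t = e_t + beta * e_{t-1}."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n + 1)
    return e[1:] + beta * e[:-1]

def drop_values(y, rate, seed=0):
    """Set a fraction of observations to NaN (uniform random deletion)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = np.nan
    return y

def rmse(true, est):
    """Root mean squared error over the supplied points."""
    return float(np.sqrt(np.mean((true - est) ** 2)))

# One simulation cell: n = 100, beta = 0.5, 10% missing
y = simulate_ma1(100, beta=0.5)
y_miss = drop_values(y, rate=0.10)
mask = np.isnan(y_miss)

# Placeholder baseline: fill missing points with the observed mean
est = np.where(mask, np.nanmean(y_miss), y_miss)
print("RMSE on missing points:", rmse(y[mask], est[mask]))
```

In the study itself, this cell would be repeated over every combination of sample size (60, 100, 300), missing rate (10%, 15%, 20%), and β value (0.5, 0.9), with each estimation method's MSE and RMSE compared on the held-out missing points.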