General Background: Deep image matting is a fundamental task in computer vision that enables precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep learning-based methods, preserving fine details such as hair and transparency remains challenging. Knowledge Gap: Existing approaches struggle to balance accuracy and efficiency, motivating new techniques that improve matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model that incorporates color-space fusion and preprocessing. Results: Experiments on the Adobe Composition-1k dataset demonstrate superior performance over traditional methods, with higher accuracy, faster processing, and better boundary preservation. Novelty: The proposed model effectively combines deep learning with fusion techniques, improving matting quality while remaining robust across varied environmental conditions. Implications: These findings highlight the potential of integrating fusion techniques with deep learning for image matting and offer insights for future research in automated image processing applications, including augmented reality, gaming, and interactive video technologies.

Highlights:
Better Precision: Fusion techniques enhance fine-detail preservation.
Faster Processing: The lightweight U-Net improves speed and accuracy.
Wide Applications: Useful for AR, gaming, and video processing.

Keywords: Deep image matting, computer vision, deep learning, fusion techniques, U-Net
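
To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of a lightweight U-Net with early color-space fusion, assuming that "color-space fusion" means concatenating the RGB input with an auxiliary color-space representation (here an illustrative BT.601 YCbCr conversion) before encoding, and that "lightweight" means small channel widths; all module and function names are illustrative and not taken from the paper.

```python
# Hypothetical sketch: lightweight U-Net for alpha matte estimation with
# early color-space fusion (RGB concatenated with YCbCr). Names and channel
# widths are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn


def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """BT.601 RGB -> YCbCr conversion for tensors in [0, 1], shape (N, 3, H, W)."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y) + 0.5
    cr = 0.713 * (r - y) + 0.5
    return torch.cat([y, cb, cr], dim=1)


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class FusionUNet(nn.Module):
    """Lightweight U-Net: RGB and YCbCr views are concatenated into a
    6-channel input (early fusion); channel widths stay small (16-128)."""

    def __init__(self, base: int = 16):
        super().__init__()
        self.enc1 = conv_block(6, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.pool = nn.MaxPool2d(2)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, kernel_size=2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # single-channel alpha matte

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, rgb_to_ycbcr(rgb)], dim=1)        # color-space fusion
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # skip connections
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                   # alpha in [0, 1]


if __name__ == "__main__":
    model = FusionUNet()
    alpha = model(torch.rand(1, 3, 256, 256))
    print(alpha.shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch the fusion is applied at the input level, so the encoder sees both the raw RGB values and a luminance/chrominance separation, which is one plausible way to help the network isolate fine boundary detail; the actual fusion strategy, preprocessing steps, and training losses used in the study are described in the paper itself.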