This work proposes novel conjugate gradient methods for solving unconstrained optimization problems and for training neural networks. The methods are derivative-based and satisfy the sufficient descent condition independently of the line search employed. Moreover, we prove that, under standard assumptions, the proposed methods converge globally. Numerical experiments demonstrate that the proposed approach is more efficient and robust than classical conjugate gradient methods, both on unconstrained optimization problems and in neural network training.
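To fix ideas, a generic nonlinear conjugate gradient iteration with a backtracking (Armijo) line search can be sketched as below. This is a minimal illustration of the classical Fletcher-Reeves scheme, not the paper's proposed update formula; the function names and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, max_iter=1000, tol=1e-6):
    """Nonlinear conjugate gradient (Fletcher-Reeves) with Armijo backtracking.

    Generic sketch for illustration only; the proposed method in the paper
    uses a different conjugacy parameter.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g  # initial search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking: shrink the step until sufficient decrease holds
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Fletcher-Reeves conjugacy parameter
        beta = g_new.dot(g_new) / g.dot(g)
        d = -g_new + beta * d
        # Restart with steepest descent if d is not a descent direction
        if g_new.dot(d) >= 0:
            d = -g_new
        x, g = x_new, g_new
    return x

# Usage on a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer is the solution of A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A @ x) - b.dot(x)
grad = lambda x: A @ x - b
x_star = conjugate_gradient(f, grad, np.array([0.0, 0.0]))
```

The restart safeguard is one common way to retain a descent direction with an inexact line search; the paper's methods instead guarantee sufficient descent by construction.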