This might be a more neurally plausible model of biological systems. I do not know of any real biological system that implements error backpropagation, the algorithm that dominates training in present-day large language models and other machine learning approaches.
Perhaps this is because artificial neural networks and AI grew out of machine learning, which in turn evolved from the mathematical framing of minimizing loss functions. A loss function typically compares predicted output with target output, and the resulting difference is then fed backward through the network rather than forward. This makes sense from a practical and analytical standpoint if observable speed of learning is the priority, since the output layer is where a weight change most immediately alters the output.
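For contrast, here is a minimal sketch of that standard loss-driven scheme (the network, sizes, and squared-error loss are illustrative assumptions, not anything prescribed above): the error is born at the output layer as the difference between predicted and target output, and the chain rule carries it backward, layer by layer, toward the input.

```python
import numpy as np

# Minimal two-layer backpropagation sketch. All sizes and the MSE loss
# are illustrative choices, not a specification.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 2))   # hidden -> output weights
lr = 0.1

x = rng.normal(size=4)         # input
target = rng.normal(size=2)    # target output

h = np.tanh(x @ W1)            # hidden activity
y = h @ W2                     # predicted output

# The error originates at the output (predicted minus target) ...
delta_out = y - target                       # dL/dy for L = 0.5*||y - target||^2
# ... and flows backward through the chain rule, not forward.
delta_hid = (delta_out @ W2.T) * (1 - h**2)  # tanh'(a) = 1 - tanh(a)^2

W2 -= lr * np.outer(h, delta_out)            # gradient step on each layer
W1 -= lr * np.outer(x, delta_hid)
```

Note that each update to W1 requires information from layers downstream of it, which is exactly the non-local dependence that is hard to locate in real synapses.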
However, if we consider our muscles and how they are regulated at the neuromuscular synapse, it is difficult to see how error could be backpropagated there. The proposal here is that error arises only at the sensors and is propagated forward. Moreover, error in and of itself is meaningless to the system unless we assume the system has a self state that is in non-equilibrium with the environment. It stands to reason, therefore, that forward-propagated sensory errors must be integrated in an autoencoding operation that promotes error signals consistent with the self and suppresses those that are not. Thus Hebbian learning remains fundamental, consistent with neurobiological findings.
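To make that proposal concrete, here is one possible toy reading; everything in it is my assumption rather than a specification: a linear autoencoder stands in for the "self" model, its reconstruction acts as the promote/suppress filter on sensory error, a fixed set point plays the role of the non-equilibrium self state, and every weight change is purely Hebbian, a local product of pre- and post-synaptic activity.

```python
import numpy as np

# Toy sketch of the forward-error proposal. All names, sizes, and the
# specific gating scheme are illustrative assumptions.
rng = np.random.default_rng(1)
n_sensors, n_code, n_motor = 8, 3, 4
A = rng.normal(scale=0.1, size=(n_sensors, n_code))   # self-model (autoencoder)
W = rng.normal(scale=0.1, size=(n_motor, n_sensors))  # forward sensor -> motor pathway
set_point = np.ones(n_sensors)                        # hypothetical self state
eta = 0.05

for step in range(200):
    sensed = set_point + rng.normal(scale=0.5, size=n_sensors)
    error = sensed - set_point          # error exists only at the sensors

    # Autoencoding gate: components of the error that the self-model can
    # reconstruct are promoted; the rest are suppressed.
    code = A.T @ error                  # encode
    gated = A @ code                    # decode = self-consistent part of the error

    post = np.tanh(W @ gated)           # error propagated forward, never backward

    # Purely local Hebbian updates: each weight change depends only on the
    # activities on its own pre- and post-synaptic sides.
    W += eta * np.outer(post, gated)
    A += eta * (np.outer(error, code) - A * code**2)  # Oja-style decay keeps A bounded
```

On this reading, no global error signal ever travels backward; the only "teacher" is the mismatch at the sensors, filtered by what the self-model already encodes, which is why nothing beyond Hebbian plasticity is required.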