Experiments for "Entropy Based Regularization Improves Performance in Forward-Forward Algorithm"
Updated Nov 3, 2025 · Jupyter Notebook
Unified, customizable implementation of the Direct Feedback Alignment (DFA), Direct Random Target Projection (DRTP), and Local Loss (LL) training algorithms in Keras/TF2.
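As background, the core idea behind DFA can be shown in a few lines: instead of backpropagating the output error through the transposed forward weights, each hidden layer receives the error through a fixed random feedback matrix. The sketch below is a minimal NumPy illustration of that idea under assumed toy shapes and hyperparameters; it is not the repo's Keras/TF2 API, and all names (`W1`, `B1`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Tiny 4 -> 8 -> 3 network on a toy batch (assumed sizes, for illustration only).
W1 = rng.normal(0.0, 0.5, (4, 8))
W2 = rng.normal(0.0, 0.5, (8, 3))
B1 = rng.normal(0.0, 0.5, (3, 8))       # fixed random feedback matrix, never trained

x = rng.normal(size=(16, 4))
y = np.eye(3)[rng.integers(0, 3, size=16)]

loss_init = mse(sigmoid(sigmoid(x @ W1) @ W2), y)

lr = 0.5
for _ in range(300):
    h = sigmoid(x @ W1)                  # forward pass
    yhat = sigmoid(h @ W2)
    e = (yhat - y) * yhat * (1 - yhat)   # output-layer error signal (MSE + sigmoid)
    dh = (e @ B1) * h * (1 - h)          # DFA: project e through fixed B1, not W2.T
    W2 -= lr * h.T @ e / len(x)
    W1 -= lr * x.T @ dh / len(x)

loss_final = mse(sigmoid(sigmoid(x @ W1) @ W2), y)
```

Because `B1` is fixed and random, no weight transport between forward and feedback paths is needed, which is what makes DFA a backpropagation alternative.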
A memory-efficient, gradient-free (zeroth-order) optimizer designed to address the curse of dimensionality in black-box optimization and memory-constrained machine learning. It provides an O(log D) gradient-estimation approach that can train neural networks without ever computing analytical derivatives or running backpropagation.
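For context on what zeroth-order gradient estimation looks like, here is a hedged sketch of the classic SPSA-style two-point estimator: the gradient is approximated from just two function evaluations along a random perturbation, with no analytical derivatives. This is standard background, not the repo's O(log D) estimator; the objective and hyperparameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(theta):
    # Black-box objective: a simple quadratic bowl with minimum at theta = 1.
    return float(np.sum((theta - 1.0) ** 2))

D = 50
theta = np.zeros(D)
f_init = f(theta)

lr, eps = 0.01, 1e-3
for _ in range(500):
    # Rademacher perturbation: each coordinate is +1 or -1.
    delta = rng.choice([-1.0, 1.0], size=D)
    # Two evaluations of f give a stochastic estimate of the full gradient.
    g_hat = (f(theta + eps * delta) - f(theta - eps * delta)) / (2 * eps) * delta
    theta -= lr * g_hat

f_final = f(theta)
```

Each step costs two function calls regardless of D, which is the appeal of zeroth-order methods when derivatives or backpropagation memory are unavailable.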