Add Adagrad optimizer implementation in Pure NumPy #13681
Open · Adhithya-Laxman wants to merge 2 commits into TheAlgorithms:master
Conversation
- Implements Adagrad (Adaptive Gradient) using pure NumPy
- Adapts learning rate individually for each parameter
- Includes comprehensive docstrings and type hints
- Adds doctests for validation
- Provides usage example demonstrating convergence
- Follows PEP8 coding standards
- Part of issue TheAlgorithms#13662
Author: Hi! This PR seems ready for review/merge. Could someone take a look? Thanks!
Description
This PR implements the Adagrad (Adaptive Gradient) optimizer using pure NumPy as part of the effort to add neural network optimizers to the repository.
This PR addresses part of issue #13662, "Add neural network optimizers module to enhance training capabilities".
What does this PR do?
Implementation Details
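The Adagrad rule accumulates the element-wise square of every gradient seen so far and scales each parameter's step by the inverse square root of its accumulator, so frequently-updated parameters get smaller steps over time. Below is a minimal sketch of that rule in pure NumPy; the name `adagrad_update` and its signature are illustrative and not necessarily what the PR's file uses.

```python
import numpy as np


def adagrad_update(
    params: np.ndarray,
    grads: np.ndarray,
    grad_accum: np.ndarray,
    learning_rate: float = 0.01,
    epsilon: float = 1e-8,
) -> tuple[np.ndarray, np.ndarray]:
    """One Adagrad step.

    Accumulator: G_t = G_{t-1} + g_t**2          (element-wise)
    Update:      theta_t = theta_{t-1} - lr * g_t / (sqrt(G_t) + epsilon)
    """
    grad_accum = grad_accum + grads**2  # per-parameter squared-gradient history
    new_params = params - learning_rate * grads / (np.sqrt(grad_accum) + epsilon)
    return new_params, grad_accum
```

The `epsilon` term guards against division by zero before any gradient has been accumulated, which is the numerical-stability feature listed below.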
Features
✅ Complete docstrings with parameter descriptions
✅ Type hints for all function parameters and return values
✅ Doctests for correctness validation
✅ Usage example demonstrating optimizer on quadratic function minimization
✅ PEP8 compliant code formatting
✅ Accumulated gradient tracking per parameter
✅ Numerical stability with epsilon parameter
Testing
All doctests pass:
Linting passes:
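The command output is not reproduced here, but typical invocations in this repository would look like the following (the file path is a guess, since the PR page does not show where the new module lives):

```bash
# Hypothetical path; substitute the actual location of the new file.
python3 -m doctest -v machine_learning/optimizers/adagrad.py
ruff check machine_learning/optimizers/adagrad.py
```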
The example output demonstrates proper convergence behavior, with the learning rate adapting automatically for each parameter.
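As a rough illustration of that behavior (reusing the `adagrad_update` sketch above, not the PR's actual example), minimizing the quadratic f(x) = sum(x**2) drives every component toward the minimum at zero:

```python
x = np.array([5.0, -3.0])
accum = np.zeros_like(x)
for _ in range(500):
    grad = 2.0 * x  # gradient of f(x) = sum(x**2)
    x, accum = adagrad_update(x, grad, accum, learning_rate=0.5)
print(x)  # both components end up close to 0.0
```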
References
Relation to Issue #13662
This PR is part of the planned optimizer sequence outlined in #13662; Adagrad is this installment, with NAG, Adam, and Muon planned as follow-ups (see Next Steps).
Why Adagrad?
Adagrad is particularly useful for problems with sparse gradients (e.g., text or recommender-system features), because parameters that receive updates infrequently accumulate less squared-gradient history and therefore keep a comparatively large effective learning rate.
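A tiny self-contained demonstration of that property: below, one parameter receives a gradient every step while another receives one only every tenth step, and the rarely-updated parameter retains the larger effective learning rate.

```python
import numpy as np

lr, eps = 0.1, 1e-8
accum = np.zeros(2)
for t in range(100):
    # parameter 0 gets a gradient every step; parameter 1 only on every 10th step
    grad = np.array([1.0, 1.0 if t % 10 == 0 else 0.0])
    accum += grad**2
print(lr / (np.sqrt(accum) + eps))  # the sparse parameter keeps the larger rate
```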
Checklist
Next Steps
Additional optimizers (NAG, Adam, Muon) will be submitted in follow-up PRs to maintain focused, reviewable contributions as outlined in issue #13662.
Related: Part of #13662