
Add lgamma and digamma ops #3181

Open
robert-johansson wants to merge 2 commits into ml-explore:main from
robert-johansson:add-lgamma-digamma
Conversation

@robert-johansson
Contributor

Summary

Closes #2050 — adds element-wise lgamma (log-gamma) and digamma (psi) functions as native unary operations.

  • Metal: Lanczos g=5 approximation for lgamma, asymptotic expansion with recurrence for digamma (new kernel lgamma.h)
  • CPU: std::lgamma for lgamma, custom asymptotic expansion for digamma
  • CUDA: built-in ::lgamma for lgamma, custom digamma
  • Autograd: grad(lgamma) returns digamma. Digamma itself throws on second-order differentiation (trigamma can be a follow-up).
  • Python bindings: mx.lgamma() and mx.digamma() with docstrings
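The two approximations named above can be sketched in plain Python (the actual kernels are Metal/C++/CUDA; the Lanczos coefficients below are the standard Numerical Recipes g=5 set and the digamma recurrence cutoff of 6 is an illustrative choice, so neither necessarily matches the PR's exact constants):

```python
import math

# Lanczos g=5 coefficients (Numerical Recipes "gammln" variant).
_LANCZOS = [
    76.18009172947146, -86.50532032941677, 24.01409824083091,
    -1.231739572450155, 0.1208650973866179e-2, -0.5395239384953e-5,
]

def lgamma_lanczos(x: float) -> float:
    """log(Gamma(x)) for x > 0 via the Lanczos g=5 approximation."""
    tmp = x + 5.5
    tmp -= (x + 0.5) * math.log(tmp)
    ser = 1.000000000190015
    for j, c in enumerate(_LANCZOS):
        ser += c / (x + 1.0 + j)
    return -tmp + math.log(2.5066282746310005 * ser / x)

def digamma_asymptotic(x: float) -> float:
    """psi(x) for x > 0: apply the recurrence psi(x) = psi(x+1) - 1/x
    until x is large, then the asymptotic series in 1/x^2."""
    result = 0.0
    while x < 6.0:              # recurrence until the series is accurate
        result -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    # psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)
    result += (math.log(x) - 0.5 * inv
               - inv2 * (1.0 / 12.0 - inv2 * (1.0 / 120.0 - inv2 / 252.0)))
    return result
```

The gradient relationship in the autograd bullet can be checked numerically: a central difference of `lgamma_lanczos` agrees with `digamma_asymptotic` to within the approximation error.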

These are needed by probabilistic programming frameworks (PyMC, scVI, GenMLX) that use log-gamma extensively in distribution log-prob computations.
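As a concrete instance of that motivation, a Gamma log-density's normalizing constant is a lgamma term; a minimal pure-Python sketch (using `math.lgamma` as a stand-in for the new `mx.lgamma`) looks like:

```python
import math

def gamma_logpdf(x: float, alpha: float, beta: float) -> float:
    """Log-density of Gamma(shape=alpha, rate=beta). The math.lgamma
    term is the normalizing constant that probabilistic frameworks
    evaluate element-wise over large parameter arrays."""
    return (alpha * math.log(beta)
            + (alpha - 1.0) * math.log(x)
            - beta * x
            - math.lgamma(alpha))
```

With a vectorized lgamma, the same expression maps directly onto array-valued `x` and `alpha`, which is the workload these frameworks run.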

Test plan

  • C++ tests: 20 new assertions covering accuracy, poles, reflection, integer promotion, vectorized paths, float16, and gradient correctness
  • Full test suite: 236/236 test cases, 3369/3369 assertions pass (zero regressions)
  • Python test cases for lgamma and digamma added
  • CUDA tests rely on CI (no local CUDA hardware available)

🤖 Generated with Claude Code

Robert Johansson and others added 2 commits February 27, 2026 16:02
Add element-wise lgamma (log-gamma) and digamma (psi) functions as
native unary operations across all backends.

Metal: Lanczos g=5 approximation for lgamma, asymptotic expansion
with recurrence for digamma. CPU: std::lgamma + custom digamma.
CUDA: built-in lgamma + custom digamma.

Full autograd support: grad(lgamma) returns digamma.

Closes ml-explore#2050

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Metal Shading Language has no call stack, so the recursive call to
lgamma_impl(1-x) in the reflection formula was silently miscompiled,
dropping the log(pi) - log(|sin(pi*x)|) terms entirely.

Fix: compute the reflection terms first, transform x -> 1-x, then
run Lanczos once on the transformed value. No function calls needed.

Verified: lgamma(0.001) now returns 6.907 (was -0.000578).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
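The fix described in this commit can be illustrated in Python; the structure (reflection terms computed up front, one transform, one core evaluation, no recursion) is the point, and `math.lgamma` stands in for the kernel's Lanczos core:

```python
import math

def _lanczos_core(x: float) -> float:
    # Stand-in for the Metal kernel's single-pass Lanczos evaluation;
    # math.lgamma is used here so the sketch is self-checking.
    return math.lgamma(x)

def lgamma_reflected(x: float) -> float:
    """Reflection formula with no function-call recursion, matching the
    shape of the fixed kernel: compute the reflection terms first,
    transform x -> 1 - x, then run the core exactly once."""
    if x >= 0.5:
        return _lanczos_core(x)
    # lgamma(x) = log(pi) - log|sin(pi*x)| - lgamma(1 - x)
    reflection = math.log(math.pi) - math.log(abs(math.sin(math.pi * x)))
    x = 1.0 - x                      # single transform, no recursion
    return reflection - _lanczos_core(x)
```

The buggy version instead called itself inside the `x < 0.5` branch; on a target with no call stack that recursion was miscompiled and the `log(pi) - log|sin(pi*x)|` terms were dropped, which is what the straight-line form above avoids.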
