Logp inference fails for exponentiated distributions #7717
Comments
This is not a mixture (mass vs density don't go well together), not sure why you mentioned it. Re bounds, they are always a bit tricky, especially when you chain multiple transformations. However, I thought we were returning -inf for this case. Is it also nan for negative numbers?
That's the mathematical form of any zero-inflated model; why wouldn't I mention it? Negative numbers return -inf, so it's specifically only for zero.
A zero-inflated model is not a mixture, it's two likelihoods. You always know which component a value belongs to, and the expression doesn't involve weighting the components (you can't weigh the density and the pmf against each other; the pmf has infinite density at the support point, I guess). Re zero: transforms right now rely on the Jacobian (just a hack) to try to enforce the domain. When it returns nan, the value is considered outside the domain. I guess the log transform Jacobian doesn't return nan for zero, but -inf?
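For what it's worth, a tiny arithmetic sketch (assumed mechanics, not the actual transform code) of how a nan can appear at exactly zero when the base logp is combined with the log-det Jacobian of the exp transform:

```python
import numpy as np

# logp of y = exp(x) at value v is inferred as logp_x(log(v)) - log(v)
# (base logp at the back-transformed value plus the log-det Jacobian of log)
v = 0.0
base_logp = -np.inf            # e.g. a Normal logp evaluated at log(0) = -inf is itself -inf
log_det_jacobian = -np.log(v)  # the Jacobian term is +inf at v = 0
print(base_logp + log_det_jacobian)  # -inf + inf -> nan, rather than the expected -inf
```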
Bruh. Where should I be looking to answer the question that actually matters? Here I don't see any kind of special logic.
Those are hurdle models and rely on truncating the continuous component so it excludes zero. It's a hack to recycle the mixture functionality; we should implement something specifically for it. Requiring the truncation adds significant cost to the logp. Re zero: should we also consider the probability to be zero (logp of -inf) when the log-det of the Jacobian returns -inf? Right now we only consider nan.
Sorry, all you showed are discrete, which is fine; those are mixtures. But we have hurdle classes that seem to be what you are looking for. Those are not mixtures, but we use Mixture with the truncation trick under the hood.
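For reference, a minimal sketch of one of those hurdle classes (parameter values are illustrative; the exact signature should be checked against the docs):

```python
import pymc as pm

with pm.Model() as model:
    # psi is the expected proportion of observations coming from the
    # (truncated) LogNormal component; zeros come from the point mass
    psi = pm.Beta("psi", 1.0, 1.0)
    y = pm.HurdleLogNormal("y", psi=psi, mu=0.0, sigma=1.0,
                           observed=[0.0, 0.0, 1.3, 2.7])
```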
Here is the line I'm talking about: https://github.com/pymc-devs/pymc/blob/main/pymc%2Flogprob%2Ftransforms.py#L227
Description
Suppose I want to make a log-normal "by hand":
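A minimal sketch (assuming an exponentiated Normal evaluated at zero; exact values are illustrative):

```python
import pymc as pm
import pytensor.tensor as pt

x = pm.Normal.dist(0.0, 1.0)
y = pt.exp(x)  # a LogNormal built "by hand" as an exponentiated Normal

# logp inference on the exponentiated variable, evaluated at 0
print(pm.logp(y, 0.0).eval())  # gives nan
```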
This should return `-inf`, since 0 is outside the support of the exponentiated distribution. The same is true for any other support-constraining transformation.

A non-trivial use case would be a mixture model with components LogNormal and DiracDelta(0). That works fine, but if I want a more fat-tailed distribution for the nonzero component (like a log-StudentT), it fails.
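A sketch of that use case (component parameters and data are illustrative):

```python
import pymc as pm
import pytensor.tensor as pt  # only needed for the hand-rolled alternative below

with pm.Model() as model:
    w = pm.Dirichlet("w", a=[1.0, 1.0])
    zero = pm.DiracDelta.dist(0.0)
    nonzero = pm.LogNormal.dist(mu=0.0, sigma=1.0)  # this combination works fine
    # A fat-tailed component built by hand, e.g.
    #   nonzero = pt.exp(pm.StudentT.dist(nu=3.0))
    # runs into the logp inference problem described above.
    y = pm.Mixture("y", w=w, comp_dists=[zero, nonzero],
                   observed=[0.0, 0.0, 1.5, 3.2])
```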