
Submitted by Style Pass, 2024-04-17

posteriors is functional first and aims to be easy to use and extend. Let's try it out by training a simple model with variational inference:
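As a standalone illustration of the idea (this is plain Python, not the posteriors API), here is a minimal sketch of variational inference on a conjugate Gaussian model: `y_i ~ N(mu, 1)` with prior `mu ~ N(0, 1)` and variational family `q(mu) = N(m, s^2)`. Because the model is conjugate, the exact posterior is `N(sum(y)/(n+1), 1/(n+1))`, so the fitted variational parameters can be checked against it. All names here are illustrative.

```python
import math
import random

random.seed(0)
mu_true = 2.0
data = [mu_true + random.gauss(0.0, 1.0) for _ in range(50)]
n = len(data)

m, log_s = 0.0, 0.0  # variational parameters; log_s keeps the scale s positive
lr = 0.01
for _ in range(5000):
    s = math.exp(log_s)
    # Analytic ELBO gradients for this Gaussian model
    grad_m = sum(y - m for y in data) - m  # d ELBO / d m
    grad_s = -(n + 1) * s + 1.0 / s        # d ELBO / d s
    m += lr * grad_m
    log_s += lr * grad_s * s               # chain rule through s = exp(log_s)

print(m, math.exp(log_s) ** 2)             # fitted q mean and variance
print(sum(data) / (n + 1), 1.0 / (n + 1))  # exact posterior mean and variance
```

Gradient ascent on the ELBO drives `q` toward the exact posterior; in a non-conjugate model the same loop would use stochastic reparameterized gradients instead of these closed forms.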

Observe that posteriors recommends specifying log_posterior and temperature such that log_posterior remains on the same scale for different batch sizes. posteriors algorithms are designed to be stable as temperature goes to zero.
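One way to satisfy this convention, sketched here in plain Python with hypothetical model functions: average the log-likelihood over the batch and downweight the log-prior by `1/num_data`, then set `temperature = 1/num_data`. The returned value then has the same scale regardless of batch size.

```python
import math

num_data = 1000  # total dataset size (hypothetical)

def log_likelihood(theta, y):
    # Per-sample Gaussian log-density N(y | theta, 1), a toy stand-in for a real model
    return -0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * math.log(2 * math.pi)

def log_posterior(theta, batch):
    # Mean over the batch keeps the value on the same scale for any batch size;
    # the prior is downweighted by 1/num_data to match.
    return (sum(log_likelihood(theta, y) for y in batch) / len(batch)
            + log_prior(theta) / num_data)

temperature = 1.0 / num_data  # recovers the usual posterior scaling

small = log_posterior(0.5, [1.0, 2.0])
large = log_posterior(0.5, [1.0, 2.0] * 50)
print(small, large)  # same value: the scale is independent of batch size
```

With this construction, `log_posterior * num_data * temperature` equals the usual unnormalized log posterior, and taking `temperature` toward zero anneals toward maximum a posteriori behaviour.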

Further, the output of log_posterior is a tuple containing the evaluation (a single-element Tensor) and a second element (a TensorTree) holding any auxiliary information we'd like to retain from the model call, in this case the model predictions. If you have no auxiliary information, you can simply return torch.tensor([]) as the second element. For more info, see torch.func.grad (with has_aux=True) or the documentation.
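A small sketch of this tuple convention, assuming PyTorch 2.x with torch.func available (the model and names here are illustrative, not from posteriors itself):

```python
import torch
from torch.func import grad

def loss_with_aux(params, batch):
    # Toy linear model: return a scalar loss plus the predictions as auxiliary info.
    x, y = batch
    preds = x @ params                 # auxiliary information we want to keep
    loss = ((preds - y) ** 2).mean()   # single-element Tensor to differentiate
    return loss, preds

params = torch.ones(3)
batch = (torch.eye(3), torch.zeros(3))

# With has_aux=True, grad differentiates only the first element of the tuple
# and passes the second element through untouched.
grads, preds = grad(loss_with_aux, has_aux=True)(params, batch)
print(grads.shape, preds)
```

The same pattern lets posteriors-style algorithms hand back model outputs alongside gradients without a second forward pass.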

posteriors is designed to be easily extensible. If your favorite method is not listed above, raise an issue and we'll see what we can do!
