Bayesian inference is a popular tool for parameter estimation. However, the posterior distribution alone might not be sufficient for decision-making. Bayesian Amortized Decision Making is a method that learns the cost of data and action pairs in order to make Bayes-optimal decisions.

Simulation-Based Inference (SBI) is a powerful tool for estimating the posterior distribution $p(\theta \mid x)$ over the parameters $\theta$ of a simulator, given observed data $x$. However, the goal is sometimes not the posterior itself but making a decision in a downstream task based on the inferred posterior distribution. Often, these decisions are associated with a cost, which one wishes to minimize. This is where Bayesian decision-making comes into play, aiming to choose actions that minimize the expected cost under uncertainty.

## Approximate Bayesian Decision Making

Given an observation $x_o$ and a posterior $p(\theta \mid x_o)$, Bayesian decision-making selects the action with the lowest cost, averaged over the distribution of parameters:

$$a^* = \underset{a \in \mathcal{A}}{\operatorname{arg\,min}} \int c(\theta, a)\, p(\theta \mid x_o)\, d\theta.$$

The function $c(\theta, a)$ quantifies the cost of taking an action $a$ if the true parameters $\theta$ of the system were known, but it is flexible enough to accommodate other cost structures as well. The true posterior $p(\theta \mid x)$ is usually not known and is approximated using SBI, typically with a conditional density estimator $q_{\phi}(\theta \mid x)$. The quality of the decision then hinges on the accuracy of this posterior approximation.

Figure 1 ([Gor23A], Figure 1): Illustration of the difference between the proposed method for amortized Bayesian decision-making and the numerical approximation of the integral over the cost function, weighted by the estimated posterior. Each marker shows the average cost difference over ten observations.

To address this challenge, [Gor23A] introduce Bayesian Amortized decision Making (BAM) in the context of SBI. Within the same setting as neural posterior estimation (NPE), BAM learns the cost of data and action pairs. Instead of averaging the cost over the posterior, the proposed method requires only a single forward pass through the network; BAM therefore performs amortized Bayesian decision-making.

BAM aims to estimate the expected cost $\mathbb{E}_{p(\theta \mid x_o)}[c(\theta, a)]$ under the true posterior. This is achieved by sampling from the joint distribution $(\theta, x) \sim p(\theta, x)$ and from an action distribution $a \sim p(a)$. A feedforward neural network $f_{\omega}(x, a)$ is then trained to regress the cost using a mean-squared-error loss:

$$\mathcal{L}(\omega) = \mathbb{E}_{p(\theta, x)\, p(a)} \left[ \left(f_{\omega}(x, a) - c(\theta, a)\right)^2 \right].$$

To define the cost function, the authors assign zero cost where $a = \theta_{\text{true}}$ and increase the cost the more $a$ deviates from $\theta_{\text{true}}$. The exact manner in which the deviation is penalized depends on the task at hand. In real-world scenarios, the cost function could also include the economic cost of the action; in epidemiology, for instance, it could account for the cost of vaccination, quarantine, or even a lockdown.

The authors prove that BAM accurately yields the expected cost: since the squared-error loss is minimized by the conditional expectation, the optimal regressor equals $\mathbb{E}_{p(\theta \mid x)}[c(\theta, a)]$. This frames Bayesian decision-making as a regression task and offers an efficient alternative to averaging the cost over posterior samples.
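To make the training procedure concrete, the following is a minimal sketch of the BAM regression step, not the authors' implementation: the Gaussian toy simulator, the quadratic cost, and all names (`simulate`, `quadratic_cost`, `CostNet`) are illustrative assumptions.

```python
# Hedged sketch of BAM training under toy assumptions; not the code from [Gor23A].
import torch
import torch.nn as nn

def prior(n):                      # theta ~ p(theta): standard normal prior (assumed)
    return torch.randn(n, 2)

def simulate(theta):               # x ~ p(x | theta): toy Gaussian simulator (assumed)
    return theta + 0.1 * torch.randn_like(theta)

def sample_actions(n):             # a ~ p(a): broad proposal over actions (assumed)
    return torch.randn(n, 2)

def quadratic_cost(theta, a):      # c(theta, a): zero when a = theta, grows with deviation
    return ((theta - a) ** 2).sum(dim=-1, keepdim=True)

class CostNet(nn.Module):
    """Feedforward network f_omega(x, a) regressing the expected cost."""
    def __init__(self, dim_x=2, dim_a=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_a, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=-1))

f = CostNet()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(5_000):
    theta = prior(256)             # (theta, x) ~ p(theta, x)
    x = simulate(theta)
    a = sample_actions(256)        # a ~ p(a)
    loss = ((f(x, a) - quadratic_cost(theta, a)) ** 2).mean()  # MSE loss L(omega)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, the choice of the action proposal $p(a)$ only determines where in action space the expected cost is learned well; any proposal with sufficient coverage of the relevant actions would serve the same purpose.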
In contrast to a Monte-Carlo approximation of the integral (NPE-MC) with samples from the (approximate) posterior,

$$\mathbb{E}_{p(\theta \mid x_o)}[c(\theta, a)] \approx \frac{1}{N} \sum_{i=1}^{N} c(\theta_i, a), \qquad \theta_i \sim q_{\phi}(\theta \mid x_o),$$

BAM directly learns the expected cost $f_{\omega}(x, a)$. It thereby circumvents the need to learn the full posterior distribution and to repeatedly evaluate the cost function $c(\theta, a)$.

## Numerical Experiments

Figure 2 ([Gor23A], Figure 3): Comparison of the proposed BAM method and the Monte-Carlo-based approximation of the expected cost against the cost incurred using the true posterior of each task.

To illustrate the effectiveness and limitations of BAM, the authors compare it with the Monte-Carlo-based approach (NPE-MC) across benchmark tasks typical of SBI, such as the Lotka-Volterra and SIR models. Additionally, they present an application to a real-world scenario in medical neuroscience. The comparison on the SBI tasks (Figure 2), based on six different simulation budgets, reveals that the Monte-Carlo variant requires a larger simulation budget to reach a solution quality comparable to BAM, suggesting significant savings in simulation resources when only Bayes-optimal decisions are sought.
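To make the contrast between the two approaches concrete, here is a hedged sketch of how a decision could be extracted from either one, assuming the `CostNet` `f`, `sample_actions`, and `quadratic_cost` from the training sketch above; the helper names (`bam_decision`, `npe_mc_decision`) and the placeholder posterior samples are hypothetical.

```python
# Hedged sketch of decision extraction at inference time; reuses f, sample_actions,
# and quadratic_cost from the training sketch above. Not the authors' code.
import torch

def bam_decision(f, x_o, candidate_actions):
    """BAM: one forward pass of f_omega(x_o, a) per candidate action; pick the minimizer."""
    x_rep = x_o.expand(candidate_actions.shape[0], -1)   # x_o has shape (1, dim_x)
    expected_cost = f(x_rep, candidate_actions).squeeze(-1)
    return candidate_actions[expected_cost.argmin()]

def npe_mc_decision(posterior_samples, cost_fn, candidate_actions):
    """NPE-MC: average c(theta_i, a) over posterior samples for every candidate action."""
    mc_costs = torch.stack([
        cost_fn(posterior_samples, a.expand_as(posterior_samples)).mean()
        for a in candidate_actions
    ])
    return candidate_actions[mc_costs.argmin()]

x_o = torch.zeros(1, 2)                   # a single toy observation
candidates = sample_actions(1_000)        # candidate actions a ~ p(a)
posterior_samples = torch.randn(500, 2)   # placeholder for theta_i ~ q_phi(theta | x_o)
with torch.no_grad():
    a_bam = bam_decision(f, x_o, candidates)
a_mc = npe_mc_decision(posterior_samples, quadratic_cost, candidates)
```

In this sketch, NPE-MC needs a trained posterior estimator to draw the $\theta_i$ plus $N$ cost evaluations per candidate action, whereas BAM only needs one evaluation of the regression network per candidate, which is the intuition behind the simulation savings reported in Figure 2.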