Definition

  1. A decision rule \(\delta_1\) is at least as good as \(\delta_2\) if \(R(\theta,\delta_1)\le R(\theta,\delta_2)\) for all \(\theta\).

  2. A decision rule \(\delta_1\) is better than \(\delta_2\) if \(R(\theta,\delta_1)\le R(\theta,\delta_2)\) for all \(\theta\), with strict inequality for some \(\theta\in\Theta\).

  3. A decision rule \(\delta_1\) is risk equivalent to \(\delta_2\) if \(R(\theta,\delta_1)= R(\theta,\delta_2)\) for all \(\theta\).
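
A quick illustration with a standard example (not in the original notes): for \(X\sim N(\theta,1)\) under squared error loss, compare \(\delta_1(x)=x\) and \(\delta_2(x)=0\). Their risks are \[ R(\theta,\delta_1)=1,\qquad R(\theta,\delta_2)=\theta^2, \] so \(\delta_2\) has smaller risk for \(|\theta|<1\) and \(\delta_1\) for \(|\theta|>1\): neither rule is at least as good as the other, i.e., the ordering above is only a partial order.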



Definition

A decision rule \(\delta_0\) is Admissible if there does not exist any decision rule \(\delta\) s.t. \(R(\theta,\delta)\le R(\theta,\delta_0)\) for all \(\theta\in \Theta\) with strict inequality for some \(\theta\in\Theta\), i.e., there does not exist any rule better than \(\delta_0\).



Remark

\(\delta_0\) being admissible does not mean that \(\delta_0\) dominates every decision rule \(\delta\). What it means is that \(\delta_0\) is NOT dominated by any decision rule \(\delta\).
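
For concreteness, a standard example (added here as a sketch): under squared error loss with \(X\sim N(\theta,1)\), the constant rule \(\delta_c(x)=c\) is admissible. Indeed, \[ R(\theta,\delta_c)=(\theta-c)^2 \] vanishes at \(\theta=c\), so any \(\delta\) dominating \(\delta_c\) would need \(R(c,\delta)=0\), forcing \(\delta(X)=c\) a.s.; since the normal family has common support, this means \(\delta=\delta_c\) with identical risk, a contradiction.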



Definition

  1. A class \(C(\subset D^*)\) of decision rules is complete if, given any \(\delta\in D^*\) with \(\delta\notin C\), there exists a rule \(\delta_0\in C\) which is better than \(\delta\).

  2. A class \(C(\subset D^*)\) of decision rules is essentially complete if, given any \(\delta\in D^*\) with \(\delta\notin C\), there exists a rule \(\delta_0\in C\) which is at least as good as \(\delta\).
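
A classical source of essentially complete classes, noted here informally as an aside: when \(T\) is sufficient and \(L(\theta,a)\) is convex in \(a\), the Rao-Blackwell theorem gives, for any rule \(\delta\), \[ \delta'(x)=E[\delta(X)\mid T(x)],\qquad R(\theta,\delta')\le R(\theta,\delta)\ \ \text{for all }\theta, \] so the rules depending on the data only through \(T\) form an essentially complete class.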



Lemma

If \(C\) is a complete class and \(A\) denotes the class of admissible rules, then \(A\subset C\).
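
A sketch of the proof, filled in since the notes omit it: suppose \(\delta\in A\) but \(\delta\notin C\). Completeness of \(C\) supplies a rule \(\delta_0\in C\) better than \(\delta\), i.e., \[ R(\theta,\delta_0)\le R(\theta,\delta)\ \ \text{for all }\theta\in\Theta,\ \text{with strict inequality somewhere}, \] contradicting the admissibility of \(\delta\). Hence \(A\subset C\).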




Lemma

If \(C\) is an essentially complete class, and there exists an admissible \(\delta\notin C\), then \(\exists \delta'\in C\) which is risk equivalent to \(\delta\).
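
Sketch (again not spelled out in the original): since \(\delta\notin C\) and \(C\) is essentially complete, there exists \(\delta'\in C\) with \(R(\theta,\delta')\le R(\theta,\delta)\) for all \(\theta\). Strict inequality at any \(\theta\) would make \(\delta'\) better than \(\delta\), contradicting admissibility, so \[ R(\theta,\delta')=R(\theta,\delta)\quad\text{for all }\theta\in\Theta, \] i.e., \(\delta'\) is risk equivalent to \(\delta\).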




Theorem (Finite case)

Assume that \(\Theta=\{\theta_1,\ldots,\theta_k\}\), and that a Bayes rule \(\delta_\xi\) w.r.t. a prior \(\xi=(\xi_1,\ldots,\xi_k)\) exists, where \(\xi_i\) is the prior probability assigned to \(\theta_i\). If \(\xi_i>0\) for all \(1\le i\le k\), then \(\delta_\xi\) is admissible.
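
The standard argument, sketched in the notation above: if some rule \(\delta\) were better than \(\delta_\xi\), then \(R(\theta_i,\delta)\le R(\theta_i,\delta_\xi)\) for every \(i\), with strict inequality for some \(i\). Since \(\xi_i>0\) at that coordinate, the Bayes risks would satisfy \[ r(\xi,\delta)=\sum_{i=1}^k \xi_i R(\theta_i,\delta)<\sum_{i=1}^k \xi_i R(\theta_i,\delta_\xi)=r(\xi,\delta_\xi), \] contradicting the fact that \(\delta_\xi\) is Bayes w.r.t. \(\xi\).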




Definition (Generalized Bayes estimator)

A rule \(\delta_0\) is Generalized Bayes w.r.t. a prior (proper or improper) \(\xi\) if, for every \(x\in X\), the posterior expected loss \(\int_\Theta L(\theta, \delta(x))P(\theta|x)\,d\theta\) attains a finite minimum at \(\delta=\delta_0\).
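
A brief unpacking of the notation, under the usual convention: \(P(\theta|x)\) here is the formal posterior \[ P(\theta|x)\propto f(x|\theta)\,g(\theta), \] where \(g\) is the (possibly improper) prior density; it is a genuine density whenever \(\int_\Theta f(x|\theta)g(\theta)\,d\theta<\infty\), even if \(\int_\Theta g(\theta)\,d\theta=\infty\).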




Example

Suppose \(X_1,\ldots, X_n\stackrel{\text{iid}}{\sim}B(1,\theta)\) (Bernoulli), and take the improper prior \(\xi\) on \((0,1)\) with pdf \(g(\theta)=\frac{1}{\theta(1-\theta)},\ \theta\in (0,1)\). Then \[ P(\theta|x)\propto\theta^{\sum x_i-1}(1-\theta)^{n-\sum x_i-1}, \] and the Generalized Bayes estimator (the posterior mean) is \(a=\bar x\).
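
The computation behind the stated answer (a short worked step, assuming \(0<\sum x_i<n\) so that the posterior is proper): the kernel above is that of a \(\mathrm{Beta}\left(\sum x_i,\ n-\sum x_i\right)\) distribution, whose mean is \[ E[\theta|x]=\frac{\sum x_i}{\sum x_i+(n-\sum x_i)}=\frac{\sum x_i}{n}=\bar x, \] and under squared error loss the posterior mean minimizes the posterior expected loss.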



Example

Suppose \(X_1,\ldots,X_n\stackrel{\text{iid}}{\sim} N(\theta,\sigma^2)\), where \(\theta\in \mathbb{R}\) is unknown and \(\sigma^2>0\) is known. Assuming squared error loss, we want to prove the admissibility of \(\bar X\).
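
The notes stop before the proof; one standard route (a sketch only, not necessarily the argument the author intended) is the limiting-Bayes method due to Blyth: compare \(\bar X\) with the Bayes estimators under conjugate priors \(\theta\sim N(0,\tau^2)\), \[ \delta_\tau(x)=\frac{\tau^2}{\tau^2+\sigma^2/n}\,\bar x, \] and show that the gap between the Bayes risks of \(\bar X\) and \(\delta_\tau\) vanishes, relative to the prior mass of any fixed interval of \(\theta\) values, as \(\tau\to\infty\); this rules out the existence of a rule better than \(\bar X\).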


