A decision rule \(\delta_1\) is at least as good as \(\delta_2\) if \(R(\theta,\delta_1)\le R(\theta,\delta_2)\) for all \(\theta\).
A decision rule \(\delta_1\) is better than \(\delta_2\) if \(R(\theta,\delta_1)\le R(\theta,\delta_2)\) for all \(\theta\), with strict inequality for some \(\theta\in\Theta\).
A decision rule \(\delta_1\) is risk equivalent to \(\delta_2\) if \(R(\theta,\delta_1)= R(\theta,\delta_2)\) for all \(\theta\).
A decision rule \(\delta_0\) is Admissible if there does not exist any decision rule \(\delta\) s.t. \(R(\theta,\delta)\le R(\theta,\delta_0)\) for all \(\theta\in \Theta\) with strict inequality for some \(\theta\in\Theta\), i.e., there does not exist any rule better than \(\delta_0\).
\(\delta_0\) admissible does not mean that \(\delta_0\) dominates every decision rule \(\delta\). What it means is that \(\delta_0\) is NOT dominated by any decision rule \(\delta\).
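As a numerical sketch of "better than" (the Normal model and the shifted estimator below are assumptions for illustration, not from the text): for \(X_1,\ldots,X_n\stackrel{\text{iid}}{\sim}N(\theta,1)\) under squared error loss, \(\bar X\) has risk \(1/n\) at every \(\theta\), while \(\bar X+0.1\) has risk \(1/n+0.01\), so \(\bar X\) is better (here strictly at every \(\theta\)).

```python
import numpy as np

# Hypothetical illustration: X1..Xn iid N(theta, 1), squared error loss.
# delta1 = Xbar has risk 1/n at every theta; delta2 = Xbar + 0.1 has risk
# 1/n + 0.01, so delta1 is better than delta2 (strictly, at every theta).
rng = np.random.default_rng(0)
n, reps = 25, 200_000

for theta in (-1.0, 0.0, 2.0):                  # a few points of Theta
    xbar = rng.normal(theta, 1.0, (reps, n)).mean(axis=1)
    r1 = np.mean((xbar - theta) ** 2)           # approx 1/n        = 0.04
    r2 = np.mean((xbar + 0.1 - theta) ** 2)     # approx 1/n + 0.01 = 0.05
    print(f"theta={theta}: R1={r1:.4f}, R2={r2:.4f}")
```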
A class \(C(\subset D^*)\) of decision rules is complete if for every \(\delta\in D^*\) with \(\delta\notin C\), there exists a rule \(\delta_0\in C\) which is better than \(\delta\).
A class \(C(\subset D^*)\) of decision rules is essentially complete if for every \(\delta\in D^*\) with \(\delta\notin C\), there exists a rule \(\delta_0\in C\) which is at least as good as \(\delta\).
Recall that "better than" requires strict inequality for some \(\theta\).
That is, \(C(\subset D^*)\) is complete when for every \(\delta\) outside \(C\) there exists a \(\delta_0\in C\) better than \(\delta\), and essentially complete when there exists a \(\delta_0\in C\) at least as good as \(\delta\).
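To make the distinction between the two notions concrete, here is a sketch on a made-up finite problem (the risk vectors are assumptions for illustration): \(\delta_4\) is risk equivalent to \(\delta_1\), so \(C=\{d_1,d_2,d_3\}\) is essentially complete but not complete.

```python
# Made-up finite problem: risk vectors (R(theta1, d), R(theta2, d)).
# d4 is risk equivalent to d1, so C = {d1, d2, d3} is essentially
# complete but NOT complete (no rule in C is strictly better than d4).
risks = {"d1": (1.0, 3.0), "d2": (3.0, 1.0), "d3": (2.0, 2.0), "d4": (1.0, 3.0)}

def at_least_as_good(a, b):
    return all(x <= y for x, y in zip(risks[a], risks[b]))

def better(a, b):   # <= everywhere and < somewhere
    return at_least_as_good(a, b) and risks[a] != risks[b]

def complete(C):
    return all(any(better(c, d) for c in C) for d in risks if d not in C)

def essentially_complete(C):
    return all(any(at_least_as_good(c, d) for c in C) for d in risks if d not in C)

C = {"d1", "d2", "d3"}
print(complete(C), essentially_complete(C))  # -> False True
```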
If \(C\) is a complete class, and \(A\) denotes the class of admissible rules, then \(A\subset C\). (If an admissible \(\delta\) were outside \(C\), completeness would give a \(\delta_0\in C\) better than \(\delta\), contradicting admissibility.)
If \(C\) is an essentially complete class, and there exists an admissible \(\delta\notin C\), then \(\exists \delta'\in C\) which is risk equivalent to \(\delta\).
Since \(C\) is essentially complete and \(\delta\notin C\), there exists \(\delta'\in C\) which is at least as good as \(\delta\), i.e., \(R(\theta,\delta')\le R(\theta,\delta)\) for all \(\theta\in\Theta\).
Since \(\delta\) is admissible, \(\delta'\) cannot be better than \(\delta\), so the inequality cannot be strict for any \(\theta\).
Hence \(R(\theta,\delta')=R(\theta,\delta)\) for all \(\theta\in\Theta\), i.e., \(\delta'\) is risk equivalent to \(\delta\).
Assume that \(\Theta=\{\theta_1,\ldots,\theta_k\}\), and that a Bayes rule \(\delta_\xi\) w.r.t. a prior \(\xi=\{\xi_1,\ldots,\xi_k\}\) exists, where \(\xi_i\) is the prior probability assigned to \(\theta_i\). If \(\xi_i>0\) for all \(1\le i\le k\), then \(\delta_\xi\) is admissible.
Proof: Suppose \(\delta_\xi\) is not admissible. Then there exists a rule \(\delta\) such that \(R(\theta,\delta)\le R(\theta,\delta_\xi)\) for all \(\theta\in\Theta\), with strict inequality for some \(\theta\in\Theta\). Since \(\xi_i>0\) for all \(i\), \[ r(\xi,\delta)=\sum_{i=1}^kR(\theta_i,\delta)\xi_i<\sum_{i=1}^kR(\theta_i,\delta_\xi)\xi_i=r(\xi,\delta_\xi), \] which contradicts the fact that \(\delta_\xi\) is Bayes w.r.t. \(\xi\). Hence \(\delta_\xi\) is admissible.
The condition \(\xi_i>0\) for all \(1\le i\le k\) cannot be dropped: if the prior assigns probability 0 to every \(\theta_i\) at which the inequality is strict, the two Bayes risks coincide and the contradiction disappears.
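The theorem can be checked on a small made-up example (the risk vectors and prior below are assumptions for illustration): with \(\Theta=\{\theta_1,\theta_2\}\) and a prior putting positive mass on both points, the rule minimizing the Bayes risk is not dominated by any other rule.

```python
# Hypothetical finite problem: Theta = {theta1, theta2}, four rules with
# risk vectors (R(theta1, d), R(theta2, d)) chosen for illustration.
risks = {
    "d1": (1.0, 4.0),
    "d2": (2.0, 2.0),
    "d3": (4.0, 1.0),
    "d4": (3.0, 3.0),   # dominated by d2, hence inadmissible
}
xi = (0.5, 0.5)         # prior with every xi_i > 0

# Bayes rule: minimizes the Bayes risk r(xi, d) = sum_i R(theta_i, d) * xi_i.
bayes = min(risks, key=lambda d: sum(p * r for p, r in zip(xi, risks[d])))

def dominated(d):
    """True if some other rule is better than d (<= everywhere, < somewhere)."""
    rd = risks[d]
    return any(all(a <= b for a, b in zip(risks[e], rd)) and risks[e] != rd
               for e in risks if e != d)

print(bayes, dominated(bayes))  # -> d2 False
```

As the theorem predicts, the Bayes rule `d2` is admissible here, while `d4` is dominated.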
A rule \(\delta_0\) is Generalized Bayes w.r.t. a prior (proper or improper) \(\xi\) if for every \(x\in X\), \(\int_\Theta L(\theta, \delta(x))P(\theta|x)\,d\theta\) takes on a finite minimum value when \(\delta=\delta_0\), where \(P(\theta|x)\) is the (possibly only formal) posterior induced by \(\xi\).
Suppose \(X_1,\ldots X_n\stackrel{\text{iid}}{\sim}B(1,\theta)\) (Bernoulli), and take the improper prior \(\xi\) on \((0,1)\) with density \(g(\theta)=\frac{1}{\theta(1-\theta)},\theta\in (0,1)\). Then \[ P(\theta|x)\propto\theta^{\sum x_i-1}(1-\theta)^{n-\sum x_i-1}, \] i.e., the posterior is \(\text{Beta}(\sum x_i,\,n-\sum x_i)\), which is proper whenever \(0<\sum x_i<n\). Under squared error loss, the Generalized Bayes estimator (the posterior mean) is \(a=\bar x\).
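A quick numerical check of this example (the particular data vector is an assumption for illustration): the mean of the \(\text{Beta}(\sum x_i, n-\sum x_i)\) posterior is \(\sum x_i/n=\bar x\).

```python
import numpy as np

# Sketch with made-up Bernoulli data: under the improper prior
# g(theta) = 1/(theta(1-theta)), the posterior kernel is
# theta^(s-1) * (1-theta)^(n-s-1), a Beta(s, n-s) density when 0 < s < n.
# Its mean should equal s/n = xbar.
x = np.array([1, 0, 1, 1, 0, 1, 0, 1])
n, s = len(x), int(x.sum())

theta = np.linspace(1e-6, 1 - 1e-6, 200_001)
kernel = theta**(s - 1) * (1 - theta)**(n - s - 1)   # unnormalized posterior
post_mean = (theta * kernel).sum() / kernel.sum()    # Riemann-sum ratio

print(post_mean, x.mean())  # both approx 0.625 = 5/8
```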
Suppose \(X_1,\ldots,X_n\stackrel{\text{iid}}\sim N(\theta,\sigma^2)\) where \(\theta\in \mathbb{R}\) is unknown and \(\sigma^2>0\) is known. Assuming squared error loss, we want to prove admissibility of \(\bar X\).