Remark

The basic idea of consistency is that if \(\hat Q_n(\theta)\) converges in probability to \(Q_0(\theta)\) for every \(\theta\), and \(Q_0(\theta)\) is maximized at the true parameter \(\theta_0\), then the limit of the maximizer \(\hat\theta\) should be the maximizer \(\theta_0\) of the limit, under conditions that permit interchanging the maximization and limiting operations.




Definition (Uniform Convergence in Probability)

\(\hat Q_n(\theta)\) converges uniformly in probability to \(Q_0(\theta)\) means \(\sup_{\theta\in\Theta} |\hat Q_n(\theta)-Q_0(\theta)|\stackrel{p}\rightarrow 0\).
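As a quick numerical check of this definition (a sketch, not part of the original notes), take \(Z_i \sim N(\theta_0, 1)\) with the objective \(\hat Q_n(\theta) = -\frac{1}{n}\sum_i (Z_i-\theta)^2\), whose pointwise limit is \(Q_0(\theta) = -(1+(\theta_0-\theta)^2)\). The code below evaluates \(\sup_{\theta\in\Theta}|\hat Q_n(\theta)-Q_0(\theta)|\) on a grid over a compact \(\Theta\); the gap should shrink as \(n\) grows.

```python
import numpy as np

# Hypothetical setup (not from the notes): Z_i ~ N(theta0, 1) with
#   Qhat_n(theta) = -(1/n) * sum_i (Z_i - theta)^2
#   Q0(theta)    = -(1 + (theta0 - theta)^2)
rng = np.random.default_rng(0)
theta0 = 1.0
grid = np.linspace(-2.0, 4.0, 601)   # grid over a compact Theta = [-2, 4]

Q0 = -(1.0 + (theta0 - grid) ** 2)

for n in [50, 500, 5_000, 50_000]:
    z = rng.normal(theta0, 1.0, size=n)
    # Expanded form: Qhat_n(theta) = -(mean(z^2) - 2*theta*mean(z) + theta^2)
    Qhat = -(np.mean(z**2) - 2.0 * grid * np.mean(z) + grid**2)
    print(f"n = {n:6d}   sup|Qhat_n - Q0| = {np.max(np.abs(Qhat - Q0)):.4f}")
```

The printed supremum decreases roughly at the \(1/\sqrt{n}\) rate, consistent with uniform convergence in probability on the compact set.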



The following four conditions are especially important in this theorem.

Theorem

If there is a function \(Q_0(\theta)\) such that

  1. \(Q_0(\theta)\) is uniquely maximized at \(\theta_0\),

  2. \(\Theta\) is compact,

  3. \(Q_0(\theta)\) is continuous,

  4. \(\hat Q_n(\theta)\) converges uniformly in probability to \(Q_0(\theta)\),

then \(\hat\theta \stackrel{p}\rightarrow \theta_0\).
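Continuing the hypothetical example from the definition above, the theorem's conclusion can be checked numerically: \(\Theta = [-2, 4]\) is compact, \(Q_0(\theta) = -(1+(\theta_0-\theta)^2)\) is continuous and uniquely maximized at \(\theta_0\), and uniform convergence was checked above, so the grid maximizer of \(\hat Q_n\) should approach \(\theta_0\).

```python
import numpy as np

# Same hypothetical setup as before: Theta = [-2, 4] compact, Q0 continuous
# and uniquely maximized at theta0, uniform convergence checked numerically.
rng = np.random.default_rng(1)
theta0 = 1.0
grid = np.linspace(-2.0, 4.0, 601)

for n in [50, 500, 5_000, 50_000]:
    z = rng.normal(theta0, 1.0, size=n)
    # Qhat_n(theta) in expanded form, evaluated on the whole grid
    Qhat = -(np.mean(z**2) - 2.0 * grid * np.mean(z) + grid**2)
    theta_hat = grid[np.argmax(Qhat)]   # extremum estimator on the grid
    print(f"n = {n:6d}   theta_hat = {theta_hat:+.3f}   theta0 = {theta0:+.3f}")
```

As \(n\) grows, `theta_hat` settles at the grid point nearest \(\theta_0\), illustrating \(\hat\theta \stackrel{p}\rightarrow \theta_0\).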





Remark

To obtain consistency of an extremum estimator, we must find \(Q_0(\theta)\). Usually, computing \(Q_0(\theta)\) is very straightforward: \(Q_0(\theta)\) is the probability limit of \(\hat Q_n(\theta)\) for any \(\theta\), and it is easily obtained by the law of large numbers (WLLN for the independent-variable case, SLLN for the i.i.d. case). For example, in the MLE case (Example 1), the law of large numbers implies that the limit of \(\hat Q_n(\theta)\) is \(Q_0(\theta)=E[\log f(Z;\theta)]\), and in the nonlinear least squares case (Example 2), \(Q_0(\theta)=-E[(y-h(X,\theta))^2]\). Examples 3 and 4 (GMM, CMD) will be treated later. A numerical illustration of the MLE case follows below.
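As a hedged sketch of the MLE case (Example 1), take \(Z_i \sim N(\theta_0,1)\) with unknown mean \(\theta\), so \(\log f(z;\theta) = -\tfrac12\log(2\pi) - \tfrac12 (z-\theta)^2\) and \(Q_0(\theta)=E[\log f(Z;\theta)] = -\tfrac12\log(2\pi) - \tfrac12\bigl(1+(\theta_0-\theta)^2\bigr)\), which is maximized at \(\theta_0\). The code below checks the pointwise LLN at a few values of \(\theta\).

```python
import numpy as np

# Hypothetical MLE example: Z ~ N(theta0, 1) with unknown mean theta, so
#   log f(z; theta) = -0.5*log(2*pi) - 0.5*(z - theta)^2
#   Q0(theta) = E[log f(Z; theta)]
#             = -0.5*log(2*pi) - 0.5*(1 + (theta0 - theta)^2)
rng = np.random.default_rng(2)
theta0, n = 1.0, 100_000
z = rng.normal(theta0, 1.0, size=n)

for theta in [0.0, 0.5, 1.0, 2.0]:
    qhat = np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * (z - theta) ** 2)  # Qhat_n
    q0 = -0.5 * np.log(2 * np.pi) - 0.5 * (1.0 + (theta0 - theta) ** 2)
    print(f"theta = {theta:+.1f}   Qhat_n = {qhat:+.4f}   Q0 = {q0:+.4f}")
```

At each fixed \(\theta\), `Qhat_n` matches `Q0` to a few decimals, and both are largest at \(\theta = \theta_0 = 1\), in line with the information inequality.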


