Parsing and display of math equations is included in this blog template. Math parsing is enabled by `remark-math` and `rehype-katex`.

KaTeX and its associated fonts are included in `_document.js`, so feel free to use math on any page. [^1]
Inline math can be included by enclosing the expression between `$` symbols.

Math code blocks are denoted by `$$`.

If you intend to use the `$` sign instead of math, you can escape it (`\$`) or specify the HTML entity (`&#36;`). [^2]
Inline or manually enumerated footnotes are also supported. Click on the links above to see them in action.
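For example, the following markdown exercises each of the features above: an inline term, a display equation, an escaped dollar sign, and a manually enumerated footnote (the `[^note]` label is just an illustrative name):

```markdown
When $a \ne 0$, the equation $ax^2 + bx + c = 0$ has two solutions:

$$
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$

This costs \$10 (or &#36;10 using the HTML entity).[^note]

[^note]: A manually enumerated footnote.
```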
## Deriving the OLS Estimator
Using matrix notation, let $n$ denote the number of observations and $k$ denote the number of regressors.
The vector of outcome variables $\mathbf{Y}$ is an $n \times 1$ matrix,

$$
\mathbf{Y} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}
$$
The matrix of regressors $\mathbf{X}$ is an $n \times k$ matrix (where each row is the transpose of a $k \times 1$ vector $\mathbf{x}_i$),

$$
\mathbf{X} = \begin{bmatrix} x_{11} & \cdots & x_{1k} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nk} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_1' \\ \vdots \\ \mathbf{x}_n' \end{bmatrix}
$$
At times it might be easier to use vector notation. For consistency, I will use a bold lowercase letter (e.g. $\mathbf{x}$) to denote a vector and a capital letter to denote a matrix. Single observations are indexed by the subscript $i$.
### Least Squares
Start:
$$
y_i = \mathbf{x}_i'\beta + u_i
$$
Assumptions:
1. Linearity (given above)
2. $\mathbb{E}(\mathbf{U} \mid \mathbf{X}) = 0$ (conditional independence)
3. $\text{rank}(\mathbf{X}) = k$ (no multicollinearity, i.e. full rank)
4. $\text{Var}(\mathbf{U} \mid \mathbf{X}) = \sigma^2 I_n$ (homoskedasticity)
Aim:
Find $\beta$ that minimises the sum of squared errors:

$$
Q = \sum_{i=1}^{n} u_i^2 = \sum_{i=1}^{n} (y_i - \mathbf{x}_i'\beta)^2 = (\mathbf{Y} - \mathbf{X}\beta)'(\mathbf{Y} - \mathbf{X}\beta)
$$
Solution:
Hints: $Q$ is a $1 \times 1$ scalar; by symmetry, $\frac{\partial \mathbf{b}'A\mathbf{b}}{\partial \mathbf{b}} = 2A\mathbf{b}$.
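A sketch of the remaining steps, using the hints above: expand $Q$ (the cross terms $\beta'\mathbf{X}'\mathbf{Y}$ and $\mathbf{Y}'\mathbf{X}\beta$ are $1 \times 1$ scalars and hence equal),

$$
Q = \mathbf{Y}'\mathbf{Y} - 2\beta'\mathbf{X}'\mathbf{Y} + \beta'\mathbf{X}'\mathbf{X}\beta
$$

then take the first-order condition and solve, where the full-rank assumption guarantees that $\mathbf{X}'\mathbf{X}$ is invertible:

$$
\frac{\partial Q}{\partial \beta} = -2\mathbf{X}'\mathbf{Y} + 2\mathbf{X}'\mathbf{X}\beta = 0
\quad \Rightarrow \quad
\hat{\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}
$$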