Square Loss Function for Regression
Preliminaries: A Simple Linear Regression, Least Squares Estimation, linear algebra
For any input \(\mathbf{x}\), our goal in a regression task is to give a prediction \(\hat{y}=f(\mathbf{x})\) that approximates the target \(t\), where the function \(f(\cdot)\) is the chosen hypothesis or model, as mentioned in the post https://anthony-tan.com/A-Simple-Linear-Regression/.
The difference between \(t\) and \(\hat{y}\) can be called 'error' or, more precisely, 'loss'. In an approximation task, 'error' occurs by chance and always exists, so 'loss' is a better word for the difference....
Another Example of Linear Regression
Preliminaries: A Simple Linear Regression, the column space
In the blog A Simple Linear Regression, the squared difference between the output of a predictor and the target was used as the loss function in a regression problem. It can also be written as:
\[ \ell(\hat{\mathbf{y}}_i,\mathbf{y}_i)=(\hat{\mathbf{y}}_i-\mathbf{y}_i)^T(\hat{\mathbf{y}}_i-\mathbf{y}_i) \tag{1} \]
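As a quick sketch, the loss in equation (1) can be evaluated with NumPy; the vectors below are made-up values for illustration:

```python
import numpy as np

# Hypothetical prediction and target vectors, for illustration only
y_hat = np.array([2.0, 3.5, 4.0])
y = np.array([2.0, 3.0, 5.0])

# Squared loss from equation (1): (y_hat - y)^T (y_hat - y)
diff = y_hat - y
loss = diff.T @ diff  # equivalently np.dot(diff, diff)
print(loss)  # 1.25
```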
The linear regression model in a matrix form is:
\[ y=\mathbf{w}^T\mathbf{x}+b\tag{2} \]
In this post we analyze the least-squares method from two different viewpoints...
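A one-line sketch of the prediction in equation (2), with illustrative (made-up) parameter values:

```python
import numpy as np

# Hypothetical parameters for the model y = w^T x + b in equation (2)
w = np.array([1.5, -0.5])  # weight vector
b = 2.0                    # bias (intercept)
x = np.array([4.0, 2.0])   # one input example

y = w.T @ x + b  # the model's prediction
print(y)  # 7.0
```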
Notations of Linear Regression
Preliminaries: Linear Algebra (the concepts of space and vector), Calculus, An Introduction to Linear Regression
We have already created a simple linear model in the post "An Introduction to Linear Regression". According to the definition of linearity, we can develop the simplest linear regression model:
\[ Y\sim w_1X+w_0\tag{1} \]
where the symbol \(\sim\) is read as "is approximately modeled as". Equation (1) can also be described as "regressing \(Y\) on \(X\) (or \(Y\) onto \(X\))"....
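Regressing \(Y\) on \(X\) as in equation (1) can be sketched with a least-squares solve; the data below is synthetic and the exact recovered coefficients hold only because it is noise-free:

```python
import numpy as np

# Synthetic, noise-free data generated from Y = 2X + 1, for illustration
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 2.0 * X + 1.0

# Design matrix with a column of ones so w_0 acts as the intercept
A = np.column_stack([X, np.ones_like(X)])

# Solve the least-squares problem for (w_1, w_0) in Y ~ w_1 X + w_0
(w1, w0), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(w1, w0)  # approximately 2.0 and 1.0
```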