An Introduction to Probabilistic Generative Models

Preliminaries: Probability, Bayes' Formula, Calculus. Probabilistic Generative Models: The generative model used for making decisions contains an inference step and a decision step: the inference step calculates \(\Pr(\mathcal{C}_k|\mathbf{x})\), the probability that the input \(\mathbf{x}\) belongs to the class \(\mathcal{C}_k\); the decision step makes a decision based on the \(\Pr(\mathcal{C}_k|\mathbf{x})\) calculated in the inference step. In this post, we give an introduction and a framework for the probabilistic generative model in classification....
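As a minimal sketch of these two steps (a hypothetical 1-D problem with made-up priors and Gaussian class-conditional densities, not code from the post), the posterior \(\Pr(\mathcal{C}_k|\mathbf{x})\) comes from Bayes' formula and the decision simply picks the most probable class:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical two-class problem with assumed priors Pr(C_k)
# and class-conditional densities p(x|C_k).
priors = np.array([0.6, 0.4])

def posterior(x):
    """Inference step: Pr(C_k|x) via Bayes' formula."""
    joint = np.array([gauss_pdf(x, 0.0, 1.0),
                      gauss_pdf(x, 2.0, 1.0)]) * priors  # p(x|C_k) Pr(C_k)
    return joint / joint.sum()                           # normalize by p(x)

def decide(x):
    """Decision step: choose the class with the largest posterior."""
    return np.argmax(posterior(x)) + 1                   # classes numbered from 1

print(posterior(1.0), "-> class", decide(1.0))
```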

February 20, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

Fisher Linear Discriminant (LDA)

Preliminaries: linear algebra, inner product, projection. Idea of the Fisher linear discriminant: The 'least-squares method' in classification can only deal with a small set of tasks, because it was designed for the regression task. So we come to the famous Fisher linear discriminant. This method is also discriminative, for it directly gives the class to which the input \(\mathbf{x}\) belongs. Assume the linear function \[ y=\mathbf{w}^T\mathbf{x}+w_0\tag{1} \]...
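A rough sketch of the Fisher criterion on assumed synthetic data (an illustration, not the post's code): the projection direction that maximizes between-class separation relative to within-class scatter is \(\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1)\).

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 1.0, size=(50, 2))   # class 1 samples (assumed data)
X2 = rng.normal([3, 2], 1.0, size=(50, 2))   # class 2 samples

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)    # class means
# Within-class scatter S_W: sum of the per-class scatter matrices
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
# Fisher direction: w proportional to S_W^{-1} (m2 - m1)
w = np.linalg.solve(S_W, m2 - m1)
w /= np.linalg.norm(w)                       # unit-length direction
print("projection direction:", w)
```

Using `np.linalg.solve` avoids explicitly inverting \(\mathbf{S}_W\), which is both cheaper and numerically safer.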

February 19, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

Discriminant Functions and Decision Boundary

Preliminaries: the definition of convexity, linear algebra, vector length, vector direction. Discriminant Function in Classification: The discriminant function, or discriminant model, is on the other side of the generative model, and here we have a look at the behavior of the discriminant function in linear classification. In the post 'Least Squares Classification', we saw that, in a linear classification task, the decision boundary is a line or hyperplane by which we separate two classes....
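A minimal sketch of that behavior (with made-up weights): the sign of the linear discriminant \(y(\mathbf{x})=\mathbf{w}^T\mathbf{x}+w_0\) tells which side of the hyperplane \(y(\mathbf{x})=0\) the input lies on.

```python
import numpy as np

w, w0 = np.array([1.0, -2.0]), 0.5   # hypothetical weights and bias

def classify(x):
    # y(x) = w^T x + w0; the hyperplane y(x) = 0 is the decision boundary
    return 1 if w @ x + w0 >= 0 else 2

print(classify(np.array([2.0, 0.0])))   # class 1 (positive side)
print(classify(np.array([0.0, 1.0])))   # class 2 (negative side)
```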

February 17, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

Least Squares in Classification

Preliminaries: 'A Simple Linear Regression', 'Least Squares Estimation', 'From Linear Regression to Linear Classification', pseudo-inverse. Least Squares for Classification: Least squares for linear regression was discussed in 'Simple Linear Regression'. In this post, we want to find out whether this powerful algorithm can be used in classification. Recalling the distinction between the properties of classification and regression, two points need to be emphasized again ('From Linear Regression to Linear Classification'):...
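A hedged sketch of least squares applied directly to classification (synthetic data and one-hot targets, assumptions of this illustration): the weight matrix comes from the pseudo-inverse of the augmented design matrix, and a new point is assigned to the class with the largest output.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, (30, 2)),       # class 1 (assumed data)
               rng.normal([4, 4], 1, (30, 2))])      # class 2
T = np.zeros((60, 2)); T[:30, 0] = 1; T[30:, 1] = 1  # one-hot targets

Xa = np.hstack([np.ones((60, 1)), X])  # prepend 1 for the bias term
W = np.linalg.pinv(Xa) @ T             # pseudo-inverse (least squares) solution

x_new = np.array([1.0, 3.5, 3.5])      # augmented test point near class 2
print("predicted class:", np.argmax(x_new @ W) + 1)
```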

February 17, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

From Linear Regression to Linear Classification

Preliminaries: 'An Introduction to Linear Regression', 'A Simple Linear Regression', Bayes' theorem, feature extraction. Recall Linear Regression: The goal of a regression problem is to find a function, or hypothesis, that, given an input \(\mathbf{x}\), makes a prediction \(\hat{y}\) to estimate the target. Both the target \(y\) and the prediction \(\hat{y}\) here are continuous, so they have the properties of numbers: consider 3 inputs \(\mathbf{x}_1\), \(\mathbf{x}_2\), and \(\mathbf{x}_3\) whose corresponding targets are \(y_1=0\), \(y_2=1\), and \(y_3=2\)....
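The 'properties of numbers' point is why class labels are usually re-encoded before classification; a common fix (a sketch, not necessarily the post's choice) is one-hot encoding, which removes the artificial ordering and distances among the labels 0, 1, 2.

```python
import numpy as np

y = np.array([0, 1, 2])   # numeric labels wrongly imply 0 < 1 < 2
one_hot = np.eye(3)[y]    # each class becomes an orthogonal unit vector
print(one_hot)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
# Every pair of encoded classes is now equidistant: no order, no scale.
```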

February 17, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

Polynomial Regression and Feature Extension of Linear Regression

Preliminaries: 'A Simple Linear Regression', 'Least Squares Estimation'. Extending Linear Regression with Features: The original linear regression has the form: \[ \begin{aligned} y(\mathbf{x})&= b + \mathbf{w}^T \mathbf{x}\\ &=w_0\cdot 1 + w_1x_1+ w_2x_2+\cdots + w_{m}x_{m} \end{aligned}\tag{1} \] where the augmented input vector \(\mathbf{x}\) and parameter vector \(\mathbf{w}\) are \((m+1)\)-dimensional vectors whose first components are \(1\) and the bias \(w_0=b\) respectively. This equation is linear in both the input vector and the parameter vector. Then an idea comes to us: if we set \(x_i=\phi_i(\mathbf{x})\), equation (1) converts to:...
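Setting \(x_i=\phi_i(\mathbf{x})\) can be sketched as follows (a hypothetical 1-D polynomial basis, synthetic data): the model stays linear in \(\mathbf{w}\) even though it is nonlinear in the original input.

```python
import numpy as np

def poly_features(x, degree):
    """phi(x) = (1, x, x^2, ..., x^degree) for a 1-D input array."""
    return np.vander(x, degree + 1, increasing=True)

x = np.linspace(-1, 1, 20)
t = 1 + 2 * x - 3 * x**2 + 0.05 * np.random.default_rng(2).normal(size=20)

Phi = poly_features(x, 2)                   # design matrix of basis functions
w = np.linalg.lstsq(Phi, t, rcond=None)[0]  # ordinary least squares in w
print("recovered coefficients:", w)         # close to (1, 2, -3)
```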

February 15, 2020 · (Last Modification: April 30, 2022) · Anthony Tan

Maximum Likelihood Estimation

Preliminaries: 'A Simple Linear Regression', 'Least Squares Estimation', linear algebra. Square Loss Function for Regression: For any input \(\mathbf{x}\), our goal in a regression task is to give a prediction \(\hat{y}=f(\mathbf{x})\) that approximates the target \(t\), where the function \(f(\cdot)\) is the chosen hypothesis or model, as mentioned in the post https://anthony-tan.com/A-Simple-Linear-Regression/. The difference between \(t\) and \(\hat{y}\) can be called the 'error' or, more precisely, the 'loss': because in an approximation task the error occurs by chance and always exists, 'loss' is a good word to represent the difference....
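A short worked sketch of the connection this builds toward (assuming Gaussian noise and synthetic data): if \(t = f(\mathbf{x}) + \epsilon\) with \(\epsilon \sim \mathcal{N}(0, \sigma^2)\), maximizing the likelihood over the weights is equivalent to minimizing the squared loss, so the MLE of a linear model coincides with the least squares fit.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
t = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)  # targets with Gaussian noise

X = np.column_stack([np.ones(50), x])        # design matrix with bias column
# Maximizing the Gaussian log-likelihood over w reduces to
# minimizing sum_i (t_i - w^T x_i)^2, i.e. ordinary least squares:
w = np.linalg.lstsq(X, t, rcond=None)[0]

resid = t - X @ w
sigma2 = resid @ resid / len(t)              # MLE of the noise variance
print("w:", w, "sigma^2:", sigma2)
```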

February 15, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

Least Squares Estimation

Preliminaries: 'A Simple Linear Regression', the column space. Another Example of Linear Regression: In the blog 'A Simple Linear Regression', the square of the difference between the output of a predictor and the target was used as the loss function in a regression problem. It can also be written as: \[ \ell(\hat{\mathbf{y}}_i,\mathbf{y}_i)=(\hat{\mathbf{y}}_i-\mathbf{y}_i)^T(\hat{\mathbf{y}}_i-\mathbf{y}_i) \tag{1} \] The linear regression model in matrix form is: \[ y=\mathbf{w}^T\mathbf{x}+b\tag{2} \] What we do in this post is analyze the least squares method from two different viewpoints...
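A minimal sketch of the two viewpoints on assumed random data: algebraically, minimizing the summed loss (1) gives the normal equations \(\mathbf{X}^T\mathbf{X}\mathbf{w}=\mathbf{X}^T\mathbf{y}\); geometrically, \(\mathbf{X}\mathbf{w}\) is the projection of \(\mathbf{y}\) onto the column space of \(\mathbf{X}\), so the residual is orthogonal to every column.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])  # design matrix
y = rng.normal(size=20)

w = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
y_hat = X @ w                          # projection onto the column space of X
# Orthogonality check: X^T (y - y_hat) should vanish
print(np.allclose(X.T @ (y - y_hat), 0))   # True
```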

February 14, 2020 · (Last Modification: April 28, 2022) · Anthony Tan

A Simple Linear Regression

Preliminaries: Linear Algebra (the concepts of space and vector), Calculus, 'An Introduction to Linear Regression'. Notations of Linear Regression: We have already created a simple linear model in the post 'An Introduction to Linear Regression'. According to the definition of linearity, we can develop the simplest linear regression model: \[ Y\sim w_1X+w_0\tag{1} \] where the symbol \(\sim\) is read as 'is approximately modeled as'. Equation (1) can also be described as 'regressing \(Y\) on \(X\)' (or '\(Y\) onto \(X\)')....
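The fit of equation (1) has a well-known closed form, sketched here on made-up data: \(w_1=\operatorname{Cov}(X,Y)/\operatorname{Var}(X)\) and \(w_0=\bar{Y}-w_1\bar{X}\).

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # roughly Y ~ 2X

w1 = np.cov(X, Y, bias=True)[0, 1] / np.var(X)  # slope: Cov(X,Y)/Var(X)
w0 = Y.mean() - w1 * X.mean()                   # intercept: Y-bar - w1 X-bar
print(f"Y ~ {w1:.2f} X + {w0:.2f}")             # about Y ~ 1.96 X + 0.14
```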

October 11, 2019 · (Last Modification: August 4, 2022) · Anthony Tan

An Introduction to Linear Regression

Preliminaries: Linear Algebra (the concepts of space and vector), Calculus. What is Linear Regression: Linear regression is a basic idea in statistical and machine learning, based on the linear combination. It is usually used to predict responses to some inputs (predictors). Machine Learning and Statistical Learning: Machine learning and statistical learning are similar but have some distinctions. In machine learning, models, whether regression models or classification models, are used to predict the outputs for new incoming inputs....

October 9, 2019 · (Last Modification: April 28, 2022) · Anthony Tan