Newton's Method

Preliminaries: ‘steepest descent algorithm’, linear algebra, calculus 1, 2

The Taylor series gives us the conditions for minimum points in terms of both the first-order and second-order terms. The first-order approximation of a performance index produced a powerful algorithm for locating minimum points, which we call the ‘steepest descent algorithm’. Now we look into the second-order approximation of a function to find out whether there is an algorithm that can likewise guide us to the minimum points....
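As a preview of where this post is heading, here is a minimal sketch of the second-order idea, assuming the standard Newton update \(\mathbf{x}_{k+1}=\mathbf{x}_k-H^{-1}\mathbf{g}_k\) (gradient \(\mathbf{g}\), Hessian \(H\)); the matrix `A`, vector `d`, and starting point are made-up example values, not taken from the post.

```python
import numpy as np

# Hypothetical quadratic performance index F(x) = 0.5 x^T A x + d^T x + c;
# A, d, and x0 are illustrative values only.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric, positive definite
d = np.array([-1.0, 1.0])

def gradient(x):
    return A @ x + d            # for this quadratic, grad F(x) = A x + d

def hessian(x):
    return A                    # the Hessian of a quadratic is the constant A

x = np.array([5.0, -3.0])       # arbitrary starting point
# Newton update: x <- x - H^{-1} g.  On a quadratic with positive-definite A
# this lands exactly on the minimum in a single step.
x = x - np.linalg.solve(hessian(x), gradient(x))
print(x)                        # [ 1. -1.], i.e. -A^{-1} d, the stationary point
```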

December 21, 2019 · (Last Modification: May 3, 2022) · Anthony Tan

Quadratic Functions

Preliminaries: linear algebra, calculus 1, 2, Taylor series

The quadratic function, a type of performance index, is universal. One of its key properties is that it can be represented exactly by a second-order Taylor series: \[ F(\mathbf{x})=\frac{1}{2}\mathbf{x}^TA\mathbf{x}+\mathbf{d}^T\mathbf{x}+c\tag{1} \] where \(A\) is a symmetric matrix (if it is not symmetric, it can be replaced by the symmetric matrix \(\frac{1}{2}(A+A^T)\) without changing \(F\)). Recall the following properties of the gradient: \[ \nabla (\mathbf{h}^T\mathbf{x})=\nabla (\mathbf{x}^T\mathbf{h})=\mathbf{h}\tag{2} \] and, for a symmetric matrix \(Q\), \[ \nabla (\mathbf{x}^TQ\mathbf{x})=Q\mathbf{x}+Q^T\mathbf{x}=2Q\mathbf{x}\tag{3} \]...
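A quick numerical sanity check of (2) and (3) can make these gradient identities concrete; the vectors `h`, `x`, the matrix `Q`, and the helper `num_grad` below are all made-up for illustration, not part of the post.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
h = rng.standard_normal(n)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # force symmetry so the 2Qx form of (3) applies
x = rng.standard_normal(n)

def num_grad(f, x, eps=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# (2): the gradient of h^T x with respect to x is h
print(np.allclose(num_grad(lambda v: h @ v, x), h))              # True
# (3): the gradient of x^T Q x is 2 Q x when Q is symmetric
print(np.allclose(num_grad(lambda v: v @ Q @ v, x), 2 * Q @ x))  # True
```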

December 19, 2019 · (Last Modification: May 1, 2022) · Anthony Tan

Performance Surfaces and Optimum Points

Preliminaries: perceptron learning algorithm, Hebbian learning algorithm, linear algebra

Several architectures of neural networks have been introduced, and each network had its own learning rule, such as the perceptron learning algorithm and the Hebbian learning algorithm. As more and more neural network architectures were designed, some general training methods became necessary. Up to now, we can classify all training rules into three general categories:...

December 19, 2019 · (Last Modification: May 1, 2022) · Anthony Tan