Andrew Ng’s Machine Learning course on Coursera provides an excellent explanation of gradient descent for linear regression. To get a really strong grasp on it, I decided to work through some of the derivations and some simple examples here.
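As a preview of where the derivation ends up, here are the key formulas in the course’s notation. For a single-variable hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$ and $m$ training examples, the cost function is the mean squared error

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

and each step of batch gradient descent updates both parameters simultaneously, with learning rate $\alpha$:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$

The extra $x^{(i)}$ factor in the $\theta_1$ update is exactly what the derivations that follow work out from the chain rule.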
Having learned about the application of RBF Networks to classification tasks, I’ve also been digging into the topics of regression and function approximation using RBFNs. I came across a very helpful blog post by Youngmok Yun on the topic of Gaussian Kernel Regression.
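To make the idea concrete before digging into the details, here is a minimal sketch of Gaussian kernel regression in its simplest (Nadaraya-Watson) form: the prediction at a query point is just a weighted average of the training outputs, with weights from a Gaussian on the distance to each training input. The function name, the `sigma` value, and the noisy-sine demo are my own illustrative choices, not anything from Yun’s post.

```python
import numpy as np

def gaussian_kernel_regression(x_train, y_train, x_query, sigma=1.0):
    """Nadaraya-Watson estimate: a weighted average of the training
    targets, weighted by a Gaussian kernel on distance to the query."""
    # Kernel weight for each training point relative to the query.
    weights = np.exp(-(x_train - x_query) ** 2 / (2.0 * sigma ** 2))
    # Prediction is the weight-normalized average of the training outputs.
    return np.sum(weights * y_train) / np.sum(weights)

# Example: approximate a noisy sine function at x = pi/2.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(100)
print(gaussian_kernel_regression(x, y, x_query=np.pi / 2, sigma=0.5))
```

The bandwidth `sigma` controls how local the averaging is: small values track the training data closely, while large values smooth aggressively.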
My education in the fundamentals of machine learning has mainly come from Andrew Ng’s excellent Coursera course on the topic. One thing that wasn’t covered in that course, though, was the topic of “boosting,” which I’ve come across in a number of different contexts now. Fortunately, it’s a relatively straightforward topic if you’re already familiar with machine learning classification.
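The core idea is that boosting trains a sequence of weak classifiers, re-weighting the training examples after each round so the next learner focuses on the examples the previous ones got wrong. Here is a rough from-scratch sketch of AdaBoost (the most common boosting variant) using decision stumps as the weak learners; the function names, the brute-force threshold search, and the toy data are my own illustrative choices, not code from any particular library.

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; y must be in {-1, +1}.
    Each round fits the best stump on the weighted data, then
    increases the weight of the examples that stump got wrong."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # example weights, start uniform
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Brute-force search over (feature, threshold, polarity) stumps
        # for the lowest weighted error.
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = max(err, 1e-10)                  # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # this stump's vote weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)         # up-weight the mistakes
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict(stumps, X):
    # Final classifier: sign of the weighted vote over all stumps.
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)

# Toy example: a single feature, separable at x = 2.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1, 1, -1, -1])
model = adaboost(X, y, n_rounds=5)
print(predict(model, X))  # -> [ 1.  1. -1. -1.]
```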