This post contains my notes on the Autoencoder section of Stanford’s deep learning tutorial / CS294A. It also contains my notes on the sparse autoencoder exercise, which was easily the most challenging piece of MATLAB code I’ve ever written!
Stanford has a very nice tutorial on Deep Learning that I’ve read through once, and I’m now working through it in more detail and completing the exercises. I’ll be posting my notes on each section as I go.
Andrew Ng’s course on Machine Learning at Coursera provides an excellent explanation of gradient descent for linear regression. To really get a strong grasp on it, I decided to work through some of the derivations and some simple examples here.
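As a quick preview of the kind of derivation the post works through, here is the setup in Ng’s notation (following the course; nothing here goes beyond what the lectures define). For linear regression, the hypothesis is $h_\theta(x) = \theta_0 + \theta_1 x$, the cost function is

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2,$$

and each step of batch gradient descent updates all parameters simultaneously (with $x_0^{(i)} = 1$ so the same rule covers $\theta_0$):

$$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}.$$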
Having learned about the application of RBF Networks to classification tasks, I’ve also been digging into the topics of regression and function approximation using RBFNs. I came across a very helpful blog post by Youngmok Yun on the topic of Gaussian Kernel Regression.
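For context, the core formula of Gaussian kernel regression is short enough to state up front. This is the standard Nadaraya–Watson form (my summary, not a quote from Yun’s post), where the bandwidth $\sigma$ is a free parameter I’m naming for illustration. The prediction at a query point $x^*$ is a weighted average of the training outputs, with weights given by a Gaussian kernel:

$$\hat{y}(x^*) = \frac{\sum_{i=1}^{m} K(x^*, x^{(i)})\, y^{(i)}}{\sum_{i=1}^{m} K(x^*, x^{(i)})}, \qquad K(x^*, x^{(i)}) = \exp\!\left( -\frac{\lVert x^* - x^{(i)} \rVert^2}{2\sigma^2} \right).$$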