danluu/UFLDL-tutorial

These are solutions to the exercises in the Stanford OpenClassroom Deep Learning class and Andrew Ng's UFLDL Tutorial. While solving these, I looked around for copies of the solutions so I could compare notes, because debugging learning algorithms is often tedious in a way that isn't educational — but almost everything I found was incomplete or obviously wrong. I don't promise that these are bug-free, but they at least give outputs within the expected range for the assignments.

Apologies for the mess. I'll clean this up when I have some spare time. Pull requests welcome, of course.

I've attempted to make this Octave-compatible, so that you can run it with free software. I've done this through the self-taught learning exercise, and it seems to work, though the results are slightly different. One side effect of this is that I'm using fminlbfgs instead of minFunc.
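For anyone wiring this up themselves, the substitution mentioned above looks roughly like the sketch below, using the sparse autoencoder exercise as an example. This is illustrative, not copied verbatim from the repo: `costFun`'s arguments follow the exercise's `sparseAutoencoderCost` signature, and the option field names are taken from each optimizer's documentation, so they may need adjusting for your versions.

```matlab
% Any exercise cost function returning [cost, grad] works here; this one
% matches the sparse autoencoder exercise's signature.
costFun = @(theta) sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                         lambda, sparsityParam, beta, patches);

% MATLAB + minFunc (the tutorial's default):
%   options = struct('maxIter', 400, 'display', 'on');
%   [optTheta, cost] = minFunc(costFun, theta, options);

% Octave + fminlbfgs (what this repo uses instead):
options = struct('GradObj', 'on', 'MaxIter', 400, 'Display', 'iter');
[optTheta, cost] = fminlbfgs(costFun, theta, options);
```

Both optimizers consume the same `(cost, gradient)`-returning function, which is why the swap is mostly a matter of building the right options struct.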

Here's the order of the exercises.

OpenClassroom:

  1. linear.m
  2. multiple.m
  3. logistic.m

UFLDL Tutorial:

  1. Sparse Autoencoder: sparseae_exercise/train.m
  2. Vectorized Implementation: sparseae_exercise/train.m (1 is already vectorized)
  3. PCA in 2D: pca_2d/pca_2d.m
  4. PCA: pca_gen/pca_gen.m
  5. Softmax Regression: softmax_exercise/softmaxExercise.m
  6. Self-Taught Learning: stl_exercise/stlExercise.m
  7. Building Deep Networks for Classification: stackedae_exercise/stackedAEExercise.m
  8. Learning Color Features with Sparse Autoencoders: linear_decoder_exercise/linearDecoderExercise.m
  9. Convolution and Pooling: cnn_exercise/cnnExercise.m
