Solver Prototxt
The solver.prototxt is a configuration file used to tell caffe how you want the network trained.
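A minimal sketch of what a solver.prototxt can look like is shown below; the net path, snapshot prefix, and all numeric values are placeholders for illustration only, not recommendations. Each field is described in the sections that follow.

```
net: "models/mymodel/train_val.prototxt"  # placeholder path to the network definition
test_iter: 100            # number of test batches per test pass
test_interval: 1000       # run the test phase every 1000 training iterations
base_lr: 0.01             # starting learning rate
lr_policy: "step"         # drop the learning rate in steps
gamma: 0.1                # multiply the learning rate by 0.1 at each step
stepsize: 10000           # take a step every 10000 iterations
momentum: 0.9
weight_decay: 0.0005
display: 100              # print progress every 100 iterations
max_iter: 45000           # stop training at iteration 45000
snapshot: 5000            # save a model and solverstate every 5000 iterations
snapshot_prefix: "models/mymodel/mymodel"  # placeholder prefix for snapshot files
solver_mode: GPU
```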
The base_lr parameter indicates the base (beginning) learning rate of the network. The value is a real number (floating point).
The lr_policy parameter indicates how the learning rate should change over time. The value is a quoted string.
Options include:
- "step" - drop the learning rate in step sizes indicated by the gamma parameter.
- "multistep" - drop the learning rate in step size indicated by the gamma at each specified stepvalue.
- "fixed" - the learning rate does not change.
- "exp" - base_lr * gamma^iter
- "poly" - the effective learning rate follows a polynomial decay, to be zero by the max_iter.
base_lr * (1 - iter/max_iter) ^ (power) - "sigmoid" - the effective learning rate follows a sigmod decay.
base_lr * ( 1/(1 exp(-gamma * (iter - stepsize))))
where base_lr, max_iter, gamma, stepsize, stepvalue and power are defined in the solver parameter protocol buffer, and iter is the current iteration.
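As a worked example of the "step" policy, the illustrative settings below start the learning rate at 0.01 and multiply it by 0.1 every 10000 iterations, so the rate becomes 0.001 at iteration 10000 and 0.0001 at iteration 20000:

```
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 10000
```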
The gamma parameter indicates how much the learning rate should change every time we reach the next "step." The value is a real number; the current learning rate is multiplied by gamma to obtain the new learning rate.
The stepsize parameter indicates how often (at some iteration count) we should move on to the next "step" of training. The value is a positive integer.
The stepvalue parameter indicates one of potentially many iteration counts at which we should move on to the next "step" of training (used with the "multistep" policy). The value is a positive integer. There is often more than one of these parameters present, each one indicating the next step iteration, as shown in the example below.
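For instance, an illustrative "multistep" schedule that drops the learning rate by a factor of 10 at iterations 30000 and 60000 simply repeats the stepvalue field:

```
lr_policy: "multistep"
gamma: 0.1
stepvalue: 30000
stepvalue: 60000
```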
The max_iter parameter indicates when the network should stop training. The value is an integer indicating which iteration should be the last.
The momentum parameter indicates how much of the previous weight update is carried over into the new update. The value is a real fraction.
The weight_decay parameter indicates the factor of (regularization) penalization of large weights. The value is often a small real fraction.
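For illustration, values along the lines of the following are common in the solver files shipped with Caffe's examples; they are not universal defaults:

```
momentum: 0.9
weight_decay: 0.0005
```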
The random_seed parameter sets a random seed used by the solver and the network (for example, in a dropout layer).
The solver_mode parameter indicates which mode will be used in solving the network.
Options include:
- CPU
- GPU
The snapshot parameter indicates how often caffe should output a model and solverstate. The value is a positive integer (an iteration count).
The snapshot_prefix parameter indicates how the snapshot output's model and solverstate file names should be prefixed. The value is a double-quoted string.
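For example, with the placeholder settings below, caffe writes a model and a solverstate file every 5000 iterations, with names that start with the given prefix and include the iteration number:

```
snapshot: 5000
snapshot_prefix: "models/mymodel/mymodel"
```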
The net parameter indicates the location of the network to be trained (the path to the network prototxt). The value is a double-quoted string.
The iter_size parameter accumulates gradients across that many batches before each weight update. With this setting, batch_size: 16 with iter_size: 1 and batch_size: 4 with iter_size: 4 are equivalent.
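For example, if memory only allows a batch_size of 4 in the network's data layer, the illustrative solver setting below still accumulates an effective batch of 16 per weight update:

```
# batch_size: 4 is set in the net's data layer prototxt
iter_size: 4
```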
The test_iter parameter indicates how many test iterations should occur per test_interval. The value is a positive integer.
The test_interval parameter indicates how often (in training iterations) the test phase of the network will be executed.
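For example, if the validation set holds 10000 images and the TEST-phase batch size in the network is 100, then test_iter: 100 covers the whole set once, and the illustrative setting below runs that full test pass every 1000 training iterations:

```
test_iter: 100
test_interval: 1000
```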
The display parameter indicates how often caffe should output results to the screen. The value is a positive integer and specifies an iteration count.
The type parameter indicates the optimization algorithm used to train the network. The value is a quoted string.
Options include:
- Stochastic Gradient Descent "SGD"
- AdaDelta "AdaDelta"
- Adaptive Gradient "AdaGrad"
- Adam "Adam"
- Nesterov’s Accelerated Gradient "Nesterov"
- RMSprop "RMSProp"
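For example, to train with Nesterov's accelerated gradient instead of plain SGD, the type field can be set as below (the momentum value is illustrative):

```
type: "Nesterov"
momentum: 0.9
```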