neupy.algorithms.polynomial_decay
- neupy.algorithms.polynomial_decay(initial_value, decay_iter, minstep=0.001, power=1.0, cycle=False, start_iter=0, name='step')
Applies polynomial decay to the learning rate. This function is a wrapper around TensorFlow's polynomial_decay function.
iteration = min(current_iteration, decay_iter)
step = minstep + (initial_value - minstep) * (1 - iteration / decay_iter) ^ power
If cycle is True then a multiple of decay_iter is used: the first multiple that is bigger than current_iteration.
decay_iter = decay_iter * ceil(current_iteration / decay_iter)
step = minstep + (initial_value - minstep) * (1 - current_iteration / decay_iter) ^ power
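To make the schedule concrete, here is a minimal standalone Python sketch of the same arithmetic. The helper name polynomial_decay_step and the max(1, ...) guard against division by zero at iteration 0 are illustrative choices, not part of NeuPy; the real function builds an equivalent TensorFlow operation instead.

import math

def polynomial_decay_step(current_iteration, initial_value, decay_iter,
                          minstep=0.001, power=1.0, cycle=False):
    # Illustrative reimplementation of the formulas above.
    if cycle:
        # Stretch decay_iter to the first multiple that covers the
        # current iteration; max(1, ...) avoids dividing by zero at
        # the very first iteration.
        decay_iter = decay_iter * max(1, math.ceil(current_iteration / decay_iter))
    iteration = min(current_iteration, decay_iter)
    return minstep + (initial_value - minstep) * (1 - iteration / decay_iter) ** power

For example, polynomial_decay_step(500, 0.1, 1000, minstep=0.01) returns 0.055: halfway through the decay window with power=1, the step sits halfway between 0.1 and 0.01.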
Parameters:
- initial_value : float
Initial value for the learning rate.
- decay_iter : int
When cycle=False, this parameter identifies the number of iterations after which minstep will be reached. When cycle=True, the decay_iter value is increased as described in the formulas above.
- minstep : float
The step will never fall below this minimum value. Defaults to 0.001.
- power : float
The power of the polynomial. Defaults to 1.
- cycle : bool
When set to True, the step continues to be reduced after current_iteration exceeds decay_iter. Defaults to False.
- start_iter : int
Start iteration. It has to be equal to 0 when the network has just started training. Defaults to 0.
- name : str
Learning rate's variable name. Defaults to 'step'.
Notes
The step will decay faster when you train with smaller batches, since every processed batch counts as one iteration.
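The reason is that the schedule counts parameter updates, not epochs: smaller batches produce more updates per epoch, so decay_iter is exhausted after fewer epochs. A quick back-of-the-envelope illustration (the dataset size and batch sizes are hypothetical):

import math

n_samples = 10000   # hypothetical training set size
decay_iter = 1000

for batch_size in (32, 128, 512):
    updates_per_epoch = math.ceil(n_samples / batch_size)
    epochs_until_minstep = decay_iter / updates_per_epoch
    print(batch_size, updates_per_epoch, round(epochs_until_minstep, 1))

With a batch size of 32 the step reaches minstep after roughly 3 epochs, while with a batch size of 512 it takes about 50.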
Examples
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> optimizer = algorithms.Momentum(
...     Input(5) >> Relu(10) >> Sigmoid(1),
...     step=algorithms.polynomial_decay(
...         initial_value=0.1,
...         decay_iter=1000,
...         minstep=0.01,
...     )
... )
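The same schedule can be made cyclic, so that instead of staying at minstep the step starts a new, shallower descent after each multiple of decay_iter; this variant simply enables the documented cycle flag on the example above.

>>> optimizer = algorithms.Momentum(
...     Input(5) >> Relu(10) >> Sigmoid(1),
...     step=algorithms.polynomial_decay(
...         initial_value=0.1,
...         decay_iter=1000,
...         minstep=0.01,
...         cycle=True,
...     )
... )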