neupy.algorithms.step_decay

neupy.algorithms.step_decay(initial_value, reduction_freq, start_iter=0, name='step')[source]

Algorithm that monotonically decreases the learning step after each iteration.

\[\alpha_{t} = \frac{\alpha_{0}}{1 + \frac{t}{m}}\]

where \(\alpha_t\) is the learning step at iteration \(t\), \(\alpha_0\) is the initial step, and \(m\) is the reduction_freq parameter.

Equivalently, in code:

step = initial_value / (1 + current_iteration / reduction_freq)
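To illustrate the schedule, here is a minimal standalone sketch (the helper step_decay_value is hypothetical, not part of NeuPy's API) that reproduces the numbers used in the reduction_freq example below:

>>> def step_decay_value(initial_value, reduction_freq, iteration):
...     # Hypothetical helper that mirrors the formula above;
...     # not NeuPy's internal implementation.
...     return initial_value / (1 + iteration / reduction_freq)
...
>>> step_decay_value(0.12, 100, 0)
0.12
>>> step_decay_value(0.12, 100, 100)
0.06
>>> step_decay_value(0.12, 100, 200)
0.04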
Parameters:
initial_value : float

Initial value for the learning rate. It’s the learning rate returned during the first iteration.

reduction_freq : int

Controls how quickly the step is reduced. The larger the value, the slower the step decreases.

For instance, if reduction_freq=100 and initial_value=0.12, then after 100 iterations the step will equal 0.06 (0.12 / 2), after 200 iterations it will equal 0.04 (0.12 / 3), and so on (reproduced numerically in the sketch above).

start_iter : int

Start iteration. It has to be equal to 0 when the network has just started training. Defaults to 0.

name : str

Learning rate’s variable name. Defaults to 'step'.

Notes

The step will be reduced faster when training batches are smaller, since smaller batches mean more iterations per epoch and the schedule advances once per iteration, as illustrated below.
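For example (a back-of-the-envelope sketch; the sample count, batch sizes, and schedule values below are illustrative assumptions):

>>> n_samples = 1000
>>> for batch_size in (10, 100):
...     # Smaller batches mean more iterations per epoch,
...     # so the schedule advances further per epoch.
...     iterations_per_epoch = n_samples // batch_size
...     step = 0.12 / (1 + iterations_per_epoch / 100)
...     print(batch_size, round(step, 3))
...
10 0.06
100 0.109

After a single epoch, batch size 10 has already halved the step, while batch size 100 has barely reduced it.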

Examples

>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> optimizer = algorithms.Momentum(
...     Input(5) >> Relu(10) >> Sigmoid(1),
...     step=algorithms.step_decay(
...         initial_value=0.1,
...         reduction_freq=100,
...     )
... )
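
The optimizer can then be trained as usual; the continuation below is a sketch that assumes random numpy training data of matching shape (the data is not part of the original example):

>>> import numpy as np
>>>
>>> x_train = np.random.random((40, 5))
>>> y_train = np.random.random((40, 1)).round()
>>>
>>> optimizer.train(x_train, y_train, epochs=10)

Each training iteration advances the schedule, so the learning step decays from its initial value of 0.1 as training progresses.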