neupy.algorithms.gd.rprop module

class neupy.algorithms.gd.rprop.RPROP[source]

Resilient backpropagation (RPROP) is an optimization algorithm for supervised learning.

The RPROP algorithm takes into account only the direction of the gradient and completely ignores its magnitude. Every weight has a unique step size associated with it (by default, all of them are equal to step).

The rule is as follows: when the gradient changes direction (its sign flips) we decrease the step size for that specific weight, multiplying it by decrease_factor, and when the sign stays the same we increase the step size for that weight, multiplying it by increase_factor.

The step size is always bounded by minstep and maxstep.
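The per-weight rule above can be summarized in a few lines of NumPy. This is a minimal illustrative sketch, not neupy's implementation: the function name and the factor defaults (common values from the RPROP literature) are assumptions.

import numpy as np

def rprop_update(weights, grads, prev_grads, steps,
                 increase_factor=1.2, decrease_factor=0.5,
                 minstep=0.001, maxstep=10.0):
    # Per-weight sign comparison between current and previous gradients
    same_sign = grads * prev_grads > 0
    sign_flip = grads * prev_grads < 0

    # Grow the step where the sign is unchanged, shrink it where it
    # flipped, and keep every step inside [minstep, maxstep]
    steps = np.where(same_sign, steps * increase_factor, steps)
    steps = np.where(sign_flip, steps * decrease_factor, steps)
    steps = np.clip(steps, minstep, maxstep)

    # Only the sign of the gradient is used, never its magnitude
    return weights - np.sign(grads) * steps, steps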

Parameters:
minstep : float

Minimum possible value for step. Defaults to 0.001.

maxstep : float

Maximum possible value for step. Defaults to 10.

increase_factor : float

Increase factor for step, applied when the gradient doesn’t change sign compared to the previous epoch.

decrease_factor : float

Decrease factor for step, applied when the gradient changes sign compared to the previous epoch.

network : list, tuple or LayerConnection instance

Network’s architecture. There are a few ways to define it.

  • List of layers. For instance, [Input(2), Tanh(4), Relu(1)].
  • Constructed layers. For instance, Input(2) >> Tanh(4) >> Relu(1).
regularizer : function or None

Network’s regularizer.

loss : str or function

Error/loss function. Defaults to mse.

  • mae - Mean Absolute Error.
  • mse - Mean Squared Error.
  • rmse - Root Mean Squared Error.
  • msle - Mean Squared Logarithmic Error.
  • rmsle - Root Mean Squared Logarithmic Error.
  • categorical_crossentropy - Categorical cross entropy.
  • binary_crossentropy - Binary cross entropy.
  • binary_hinge - Binary hinge loss.
  • categorical_hinge - Categorical hinge loss.
  • Custom function that accepts two mandatory arguments: the first one is the expected value and the second one is the predicted value (see the usage sketch after this parameter list). Example:
def custom_func(expected, predicted):
    return expected - predicted
step : float, Variable

Learning rate, defaults to 0.1.

show_epoch : int

This property controls how often the network displays information about training. It has to be a positive integer. For instance, the number 100 means that the network shows a summary at the 1st, 100th, 200th, 300th … and last epochs.

Defaults to 1.

shuffle_data : bool

If it’s True, training data will be shuffled before the training. Defaults to True.

signals : dict, list or function

Function that will be triggered after certain events during the training.

verbose : bool

Controls verbose output in the terminal. The True value enables informative output in the terminal and False disables it. Defaults to False.
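For the custom loss option documented above, the function is passed directly via the loss parameter. A minimal sketch, assuming the TensorFlow backend that recent neupy versions build on; scaled_mae is a hypothetical name, not part of neupy:

import tensorflow as tf

def scaled_mae(expected, predicted):
    # Two mandatory arguments, as documented above
    return 2 * tf.reduce_mean(tf.abs(expected - predicted))

optimizer = algorithms.RPROP(network, loss=scaled_mae)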

See also

IRPROPPlus
iRPROP+ algorithm.
GradientDescent
GradientDescent algorithm.

Notes

The algorithm doesn’t work with mini-batches.

Examples

>>> import numpy as np
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> x_train = np.array([[1, 2], [3, 4]])
>>> y_train = np.array([[1], [0]])
>>>
>>> network = Input(2) >> Sigmoid(3) >> Sigmoid(1)
>>> optimizer = algorithms.RPROP(network)
>>> optimizer.train(x_train, y_train)
Attributes:
errors : list

Information about errors. It has two main attributes, namely train and valid. These attributes provide access to the training and validation errors respectively.

last_epoch : int

Number of the last trained epoch. After initialization it is equal to 0.

n_updates_made : int

Number of training updates applied to the network.
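Continuing the example above, these attributes can be inspected after training. A sketch; the exact values depend on the run:

>>> optimizer.errors.train[-1]  # training error at the last epoch
>>> optimizer.last_epoch        # number of the last trained epoch
>>> optimizer.n_updates_made    # total number of updates applied
>>> y_predicted = optimizer.predict(x_train)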

Methods

predict(X)
    Predicts output for the specified input.

train(X_train, y_train, X_test=None, y_test=None, epochs=100)
    Trains the network. You can control the training procedure with the epochs parameter. Both X_test and y_test have to be provided when validation after each training epoch is required.

fit(*args, **kwargs)
    Alias to the train method.
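When a validation set is passed to train, the per-epoch validation error becomes available as well. A sketch; x_test and y_test are hypothetical held-out arrays:

>>> optimizer.train(x_train, y_train, x_test, y_test, epochs=200)
>>> optimizer.errors.valid[-1]  # validation error at the last epoch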
decrease_factor = None[source]
increase_factor = None[source]
init_train_updates()[source]
maxstep = None[source]
minstep = None[source]
options = {
    'decrease_factor': Option(class_name='RPROP', value=ProperFractionProperty(name="decrease_factor")),
    'increase_factor': Option(class_name='RPROP', value=BoundedProperty(name="increase_factor")),
    'loss': Option(class_name='BaseOptimizer', value=FunctionWithOptionsProperty(name="loss")),
    'maxstep': Option(class_name='RPROP', value=BoundedProperty(name="maxstep")),
    'minstep': Option(class_name='RPROP', value=BoundedProperty(name="minstep")),
    'regularizer': Option(class_name='BaseOptimizer', value=Property(name="regularizer")),
    'show_epoch': Option(class_name='BaseNetwork', value=IntProperty(name="show_epoch")),
    'shuffle_data': Option(class_name='BaseNetwork', value=Property(name="shuffle_data")),
    'signals': Option(class_name='BaseNetwork', value=Property(name="signals")),
    'step': Option(class_name='BaseOptimizer', value=ScalarVariableProperty(name="step")),
    'target': Option(class_name='BaseOptimizer', value=Property(name="target")),
    'verbose': Option(class_name='Verbose', value=VerboseProperty(name="verbose"))}[source]
update_prev_delta(prev_delta)[source]
class neupy.algorithms.gd.rprop.IRPROPPlus[source]

iRPROP+ is an optimization algorithm for supervised learning and a variation of the RPROP algorithm. In addition to RPROP's sign-based step adaptation, iRPROP+ reverts the previous weight update whenever the gradient changes sign and the training error has increased.
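A minimal NumPy sketch of this backtracking rule, following the update scheme described in [1]; the function name, argument layout and factor defaults are assumptions, not neupy's API:

import numpy as np

def irprop_plus_update(weights, grads, prev_grads, steps, prev_deltas,
                       error, prev_error,
                       increase_factor=1.2, decrease_factor=0.5,
                       minstep=0.001, maxstep=10.0):
    same_sign = grads * prev_grads > 0
    sign_flip = grads * prev_grads < 0

    # Same per-weight step adaptation as in RPROP
    steps = np.where(same_sign, steps * increase_factor, steps)
    steps = np.where(sign_flip, steps * decrease_factor, steps)
    steps = np.clip(steps, minstep, maxstep)

    # Regular update where the sign did not flip; no update where it did
    deltas = np.where(~sign_flip, -np.sign(grads) * steps, 0.0)

    # Backtracking: where the sign flipped AND the error grew,
    # undo the previous weight update
    revert = sign_flip & (error > prev_error)
    deltas = np.where(revert, -prev_deltas, deltas)

    # Zero the gradient where the sign flipped, so the next epoch
    # skips the adaptation for those weights
    grads = np.where(sign_flip, 0.0, grads)
    return weights + deltas, steps, deltas, grads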

Parameters:
minstep : float

Minimum possible value for step. Defaults to 0.001.

maxstep : float

Maximum possible value for step. Defaults to 10.

increase_factor : float

Increase factor for step, applied when the gradient doesn’t change sign compared to the previous epoch.

decrease_factor : float

Decrease factor for step, applied when the gradient changes sign compared to the previous epoch.

regularizer : function or None

Network’s regularizer.

network : list, tuple or LayerConnection instance

Network’s architecture. There are a few ways to define it.

  • List of layers. For instance, [Input(2), Tanh(4), Relu(1)].
  • Constructed layers. For instance, Input(2) >> Tanh(4) >> Relu(1).
loss : str or function

Error/loss function. Defaults to mse.

  • mae - Mean Absolute Error.
  • mse - Mean Squared Error.
  • rmse - Root Mean Squared Error.
  • msle - Mean Squared Logarithmic Error.
  • rmsle - Root Mean Squared Logarithmic Error.
  • categorical_crossentropy - Categorical cross entropy.
  • binary_crossentropy - Binary cross entropy.
  • binary_hinge - Binary hinge loss.
  • categorical_hinge - Categorical hinge loss.
  • Custom function that accepts two mandatory arguments: the first one is the expected value and the second one is the predicted value. Example:
def custom_func(expected, predicted):
    return expected - predicted
show_epoch : int

This property controls how often the network displays information about training. It has to be a positive integer. For instance, the number 100 means that the network shows a summary at the 1st, 100th, 200th, 300th … and last epochs.

Defaults to 1.

shuffle_data : bool

If it’s True, training data will be shuffled before the training. Defaults to True.

signals : dict, list or function

Function that will be triggered after certain events during the training.

verbose : bool

Controls verbose output in the terminal. The True value enables informative output in the terminal and False disables it. Defaults to False.

See also

RPROP
RPROP algorithm.
GradientDescent
GradientDescent algorithm.

Notes

The algorithm doesn’t work with mini-batches.

References

[1] Christian Igel, Michael Huesken (2000)
Improving the Rprop Learning Algorithm

Examples

>>> import numpy as np
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> x_train = np.array([[1, 2], [3, 4]])
>>> y_train = np.array([[1], [0]])
>>>
>>> network = Input(2) >> Sigmoid(3) >> Sigmoid(1)
>>> optimizer = algorithms.IRPROPPlus(network)
>>> optimizer.train(x_train, y_train)

Methods

predict(X)
    Predicts output for the specified input.

train(X_train, y_train, X_test=None, y_test=None, epochs=100)
    Trains the network. You can control the training procedure with the epochs parameter. Both X_test and y_test have to be provided when validation after each training epoch is required.

fit(*args, **kwargs)
    Alias to the train method.
init_functions()[source]
one_training_update(X_train, y_train)[source]

Applies a single training update to the network using the specified data.

Parameters:
X_train : array-like

Training input data.

y_train : array-like

Training target data.

options = {
    'decrease_factor': Option(class_name='RPROP', value=ProperFractionProperty(name="decrease_factor")),
    'increase_factor': Option(class_name='RPROP', value=BoundedProperty(name="increase_factor")),
    'loss': Option(class_name='BaseOptimizer', value=FunctionWithOptionsProperty(name="loss")),
    'maxstep': Option(class_name='RPROP', value=BoundedProperty(name="maxstep")),
    'minstep': Option(class_name='RPROP', value=BoundedProperty(name="minstep")),
    'regularizer': Option(class_name='BaseOptimizer', value=Property(name="regularizer")),
    'show_epoch': Option(class_name='BaseNetwork', value=IntProperty(name="show_epoch")),
    'shuffle_data': Option(class_name='BaseNetwork', value=Property(name="shuffle_data")),
    'signals': Option(class_name='BaseNetwork', value=Property(name="signals")),
    'step': Option(class_name='BaseOptimizer', value=ScalarVariableProperty(name="step")),
    'target': Option(class_name='BaseOptimizer', value=Property(name="target")),
    'verbose': Option(class_name='Verbose', value=VerboseProperty(name="verbose"))}[source]
update_prev_delta(prev_delta)[source]