neupy.algorithms.gd.errors module

neupy.algorithms.gd.errors.mse(expected, predicted)[source]

Mean squared error.

\[mse(t, o) = mean((t - o) ^ 2)\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
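
As a quick illustration of the formula, a NumPy sketch with hypothetical values (the library function itself operates on Theano variables):

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    predicted = np.array([1.5, 2.0, 2.5])
    error = np.mean((expected - predicted) ** 2)  # (0.25 + 0.0 + 0.25) / 3 ≈ 0.1667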

neupy.algorithms.gd.errors.rmse(expected, predicted)[source]

Root mean squared error.

\[rmse(t, o) = \sqrt{mean((t - o) ^ 2)} = \sqrt{mse(t, o)}\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
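
The same hypothetical NumPy sketch, taking the square root of the mean squared error:

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    predicted = np.array([1.5, 2.0, 2.5])
    error = np.sqrt(np.mean((expected - predicted) ** 2))  # sqrt(0.1667) ≈ 0.4082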

neupy.algorithms.gd.errors.mae(expected, predicted)[source]

Mean absolute error.

\[mae(t, o) = mean(\left| t - o \right|)\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
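
A NumPy sketch of the formula with hypothetical values:

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    predicted = np.array([1.5, 2.0, 2.5])
    error = np.mean(np.abs(expected - predicted))  # (0.5 + 0.0 + 0.5) / 3 ≈ 0.3333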

neupy.algorithms.gd.errors.msle(expected, predicted)[source]

Mean squared logarithmic error.

\[msle(t, o) = mean((\log(t + 1) - \log(o + 1)) ^ 2)\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
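
A NumPy sketch of the formula with hypothetical non-negative values (the shift by 1 keeps the logarithm defined at zero):

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    predicted = np.array([1.5, 2.0, 2.5])
    error = np.mean((np.log(expected + 1) - np.log(predicted + 1)) ** 2)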

neupy.algorithms.gd.errors.rmsle(expected, predicted)[source]

Root mean squared logarithmic error.

\[rmsle(t, o) = \sqrt{ mean((\log(t + 1) - \log(o + 1)) ^ 2) } = \sqrt{msle(t, o)}\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
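
A NumPy sketch of the formula, the square root of the msle example above:

    import numpy as np

    expected = np.array([1.0, 2.0, 3.0])
    predicted = np.array([1.5, 2.0, 2.5])
    error = np.sqrt(np.mean((np.log(expected + 1) - np.log(predicted + 1)) ** 2))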

neupy.algorithms.gd.errors.binary_crossentropy(expected, predicted)[source]

Binary cross-entropy error.

\[crossentropy(t, o) = -(t \cdot \log(o) + (1 - t) \cdot \log(1 - o))\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
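
A NumPy sketch with hypothetical targets and probabilities; the formula above is element-wise, and the averaging step shown here is only for illustration:

    import numpy as np

    expected = np.array([1.0, 0.0, 1.0])     # targets in {0, 1}
    predicted = np.array([0.9, 0.2, 0.6])    # predicted probabilities in (0, 1)
    elementwise = -(expected * np.log(predicted)
                    + (1 - expected) * np.log(1 - predicted))
    error = elementwise.mean()               # mean reduction shown for illustration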

neupy.algorithms.gd.errors.categorical_crossentropy(expected, predicted)[source]

Categorical cross-entropy error.

Parameters:

expected : array-like, theano variable

predicted : array-like, theano variable

Returns:

array-like, theano variable
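
No formula is listed above; as an assumption, the sketch below uses the standard definition, the mean over samples of \(-\sum_j t_j \log(o_j)\) with one-hot targets:

    import numpy as np

    expected = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])       # one-hot targets
    predicted = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1]])      # predicted class probabilities
    error = np.mean(-np.sum(expected * np.log(predicted), axis=1))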

neupy.algorithms.gd.errors.binary_hinge(expected, predicted, delta=1)[source]

Computes the binary hinge loss between predictions and targets.

\[hinge(t, o) = \max(0, \delta - t o)\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : Theano tensor

Targets in {-1, 1} such as ground truth labels.

predicted : Theano tensor

Predictions in (-1, 1), such as the hyperbolic tangent output of a neural network.

delta : scalar

The hinge loss margin. Defaults to 1.

Returns:

Theano tensor

An expression for the average binary hinge loss.

Notes

This is an alternative to the binary cross-entropy loss for binary classification problems.
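
A NumPy sketch of the averaged loss with hypothetical values and the default margin \(\delta = 1\):

    import numpy as np

    expected = np.array([1.0, -1.0, 1.0])     # targets in {-1, 1}
    predicted = np.array([0.8, -0.3, -0.1])   # predictions in (-1, 1)
    error = np.mean(np.maximum(0, 1 - expected * predicted))  # delta = 1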

neupy.algorithms.gd.errors.categorical_hinge(expected, predicted, delta=1)[source]

Computes the multi-class hinge loss between predictions and targets.

\[hinge_{i}(t, o) = \max_{j \neq t_i} (0, o_j - o_{t_i} + \delta)\]

where \(t=expected\) and \(o=predicted\)

Parameters:

expected : Theano 2D tensor or 1D tensor

Either a vector of ints giving the correct class index per data point, or a 2D tensor of one-hot encodings of the correct classes in the same layout as the predictions (non-binary targets in [0, 1] do not work!).

predicted : Theano 2D tensor

Predictions in (0, 1), such as softmax output of a neural network, with data points in rows and class probabilities in columns.

delta : scalar

The hinge loss margin. Defaults to 1.

Returns:

Theano 1D tensor

An expression for the average multi-class hinge loss.

Notes

This is an alternative to the categorical cross-entropy loss for multi-class classification problems.
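
A NumPy sketch of the per-sample loss with integer class targets and the default margin \(\delta = 1\) (illustrative only; variable names are hypothetical):

    import numpy as np

    predicted = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.5, 0.4]])     # class scores, one row per sample
    expected = np.array([0, 2])                 # correct class index per sample
    rows = np.arange(len(expected))
    correct = predicted[rows, expected]         # score of the correct class
    rest = predicted.copy()
    rest[rows, expected] = -np.inf              # exclude the correct class
    per_sample = np.maximum(0, rest.max(axis=1) - correct + 1)  # delta = 1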