neupy.layers.activations module

class neupy.layers.activations.ActivationLayer[source]

Base class for the layers based on the activation functions.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
initialize()[source]

Initialize the connection.

options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
output(input_value)[source]

Return output based on the input value.

Parameters: input_value
output_shape[source]
size = None[source]
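To make the parameter semantics concrete, here is a plain-Python sketch of what an activation layer computes; the helper names are illustrative and not part of the NeuPy API:

```python
def activation_layer_output(x, W, b, activation):
    """Sketch of a layer with a defined ``size``: an affine
    transformation followed by the activation function,
    i.e. activation(x @ W + b) for a single input vector."""
    n_in, n_out = len(W), len(W[0])
    z = [sum(x[i] * W[i][j] for i in range(n_in)) + b[j]
         for j in range(n_out)]
    return [activation(v) for v in z]


def activation_only(x, activation):
    """Sketch of a layer with size=None: no parameters are
    created and only the activation function is applied."""
    return [activation(v) for v in x]
```

In NeuPy networks these layers are typically composed into a connection, for example layers.Input(784) > layers.Relu(500) > layers.Softmax(10), where each activation layer owns the weight and bias that feed into it.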
class neupy.layers.activations.Linear[source]

The layer with the linear activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
class neupy.layers.activations.Sigmoid[source]

The layer with the sigmoid activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
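The function this layer applies elementwise is the logistic sigmoid. A minimal plain-Python sketch (the function name is illustrative, not part of the NeuPy API):

```python
import math

def sigmoid(x):
    # Logistic function: maps any real value into the open
    # interval (0, 1), saturating at both ends
    return 1.0 / (1.0 + math.exp(-x))
```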
class neupy.layers.activations.HardSigmoid[source]

The layer with the hard sigmoid activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
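The hard sigmoid is a cheap piecewise-linear approximation of the sigmoid. Assuming the common formulation clip(0.2 * x + 0.5, 0, 1), which is the one Theano uses, a sketch:

```python
def hard_sigmoid(x):
    # Piecewise-linear approximation of the sigmoid:
    # avoids computing exp() at the cost of a non-smooth curve
    return min(1.0, max(0.0, 0.2 * x + 0.5))
```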
class neupy.layers.activations.Step[source]

The layer with the step activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
class neupy.layers.activations.Tanh[source]

The layer with the tanh activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
class neupy.layers.activations.Relu[source]

The layer with the rectifier (ReLu) activation function.

Parameters:

alpha : float

The alpha parameter defines the slope for negative values. If alpha is a non-zero value, the layer behaves like a leaky ReLu. Defaults to 0.

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
alpha = None[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Relu', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'alpha': Option(class_name='Relu', value=NumberProperty(name="alpha")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
weight = None[source]
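The alpha parameter only affects negative inputs. A plain-Python sketch of the elementwise function (the name is illustrative, not part of the NeuPy API):

```python
def relu(x, alpha=0.0):
    # Identity for non-negative inputs; negative inputs are
    # zeroed when alpha == 0 and scaled by alpha otherwise
    # (the leaky ReLu behaviour)
    return x if x >= 0 else alpha * x
```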
class neupy.layers.activations.Softplus[source]

The layer with the softplus activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
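The softplus function, log(1 + exp(x)), is a smooth approximation of the ReLu. A plain-Python sketch (illustrative name, not a NeuPy API):

```python
import math

def softplus(x):
    # Smooth approximation of the ReLu: log(1 + exp(x));
    # log1p keeps the small-x case numerically accurate
    return math.log1p(math.exp(x))
```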
class neupy.layers.activations.Softmax[source]

The layer with the softmax activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
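Unlike the other activations here, softmax normalizes over a whole vector rather than acting elementwise. A plain-Python sketch with the usual max-shift for numerical stability (illustrative name, not a NeuPy API):

```python
import math

def softmax(values):
    # Subtract the maximum before exponentiating so that
    # exp() never overflows; the outputs are positive and
    # sum to 1, so they can be read as class probabilities
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```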
class neupy.layers.activations.Elu[source]

The layer with the exponential linear unit (ELU) activation function.

Parameters:

alpha : float

The alpha parameter scales the exponential response for negative values. Defaults to 1.

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

References

[R1]http://arxiv.org/pdf/1511.07289v3.pdf

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
alpha = None[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'alpha': Option(class_name='Elu', value=NumberProperty(name="alpha")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
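Per the ELU paper referenced above, the function is x for non-negative inputs and alpha * (exp(x) - 1) otherwise. A plain-Python sketch (illustrative name, not a NeuPy API):

```python
import math

def elu(x, alpha=1.0):
    # Identity for non-negative inputs; negative inputs
    # saturate smoothly towards -alpha as x -> -inf
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)
```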
class neupy.layers.activations.PRelu[source]

The layer with the parametrized ReLu activation function.

Parameters:

alpha_axes : int or tuple

Axes that will not include a unique alpha parameter. A single integer value is equivalent to a tuple with one value. Defaults to 1.

alpha : array-like, Theano shared variable, scalar or Initializer

Separate alpha parameter for each non-shared axis of the ReLu. A scalar value means that every element in the tensor will be set to the specified value. The default initialization methods can be found here. Defaults to Constant(value=0.25).

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

References

[R2]https://arxiv.org/pdf/1502.01852v1.pdf

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
alpha = None[source]
alpha_axes = None[source]
initialize()[source]

Initialize the connection.

options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'alpha': Option(class_name='PRelu', value=ParameterProperty(name="alpha")), 'alpha_axes': Option(class_name='PRelu', value=AxesProperty(name="alpha_axes")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
validate(input_shape)[source]

Validate the input shape value before assigning it.

Parameters: input_shape : tuple of int
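To illustrate the alpha sharing: with an alpha parameter per position along one axis, that alpha is reused across every other axis. A plain-Python sketch for a single 2D sample, assuming one alpha per row (the names and the 2D restriction are illustrative, not the NeuPy API):

```python
def prelu(sample, alpha):
    """Apply PRelu to one sample shaped (rows, features),
    with one trainable alpha per row; each alpha is shared
    across all features of its row."""
    return [[v if v >= 0 else a * v for v in row]
            for row, a in zip(sample, alpha)]
```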
class neupy.layers.activations.LeakyRelu[source]

The layer with the leaky rectifier (Leaky ReLu) activation function.

Parameters:

size : int or None

Layer input size. None means that the layer will not create parameters and will only apply the activation function to the given input value.

weight : array-like, Theano variable, scalar or Initializer

Defines the layer’s weights. The default initialization methods can be found here. Defaults to XavierNormal().

bias : 1D array-like, Theano variable, scalar, Initializer or None

Defines the layer’s bias. The default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the parameters list.

name : str or None

Layer’s identifier. If name is None, the name will be generated automatically. Defaults to None.

Notes

Does the same as layers.Relu(input_size, alpha=0.01).

Attributes

input_shape (tuple) Layer’s input shape.
output_shape (tuple) Layer’s output shape.
training_state (bool) Defines whether the layer is in the training state.
parameters (dict) Trainable parameters.
graph (LayerGraph instance) Graph that stores all relations between layers.

Methods

disable_training_state() Switch off the training state.
initialize() Set up important configurations related to the layer.
activation_function(input_value)[source]
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="weight")), 'bias': Option(class_name='ParameterBasedLayer', value=ParameterProperty(name="bias")), 'size': Option(class_name='ActivationLayer', value=IntProperty(name="size"))}[source]
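Following the note above, this layer fixes the negative-input slope at 0.01. A plain-Python sketch of the elementwise function (illustrative name, not a NeuPy API):

```python
def leaky_relu(x):
    # Same as a ReLu with alpha=0.01: a small but non-zero
    # slope for negative inputs keeps their gradient alive
    return x if x >= 0 else 0.01 * x
```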