neupy.layers.stochastic module

class neupy.layers.stochastic.Dropout[source]

Dropout layer. It randomly switches off (multiplies by zero) input values; the probability of each value being switched off can be controlled with the proba parameter. For example, proba=0.2 means that 20% of the input values will be multiplied by 0 and the remaining 80% will be left unchanged.

It’s important to note that output from the dropout is controlled by the training parameter in the output method. Dropout will be applied only when training=True is propagated through the network; otherwise the layer acts as an identity.
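The masking described above can be sketched with NumPy. This is a conceptual illustration only, not neupy's implementation (neupy applies the mask inside its computational graph); the function name and rng handling are assumptions.

```python
import numpy as np

def dropout(values, proba, training=True, rng=None):
    """Conceptual sketch: during training, multiply each input value by
    zero with probability `proba`; otherwise pass values through."""
    if not training:
        return values  # acts as an identity outside of training
    rng = rng or np.random.default_rng(0)
    mask = rng.random(values.shape) >= proba  # True keeps the value
    return values * mask

x = np.ones((4, 4))
dropped = dropout(x, proba=0.5)                    # some entries zeroed
unchanged = dropout(x, proba=0.5, training=False)  # identical to x
```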

Parameters:
proba : float

Fraction of the input units to drop. Value needs to be between 0 and 1.

name : str or None

Layer’s name. Can be used as a reference to a specific layer. The name can be specified as:

  • String: The specified name will be used as a direct reference to the layer. For example, name="fc".
  • Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:<03d}" will be converted to fc-001, fc-002, fc-003, and so on.
  • None: When the value is None, the name will be generated from the class name.

Defaults to None.

See also

DropBlock
DropBlock layer.

References

[1] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever,
Ruslan Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, 2014.

Examples

>>> from neupy.layers import *
>>> network = join(
...     Input(10),
...     Relu(5) >> Dropout(0.5),
...     Relu(5) >> Dropout(0.5),
...     Sigmoid(1),
... )
>>> network
(?, 10) -> [... 6 layers ...] -> (?, 1)
Attributes:
variables : dict

Variable names and their values. The dictionary can be empty if the variables haven’t been created yet.

Methods

variable(value, name, shape=None, trainable=True) Initializes variable with specified values.
get_output_shape(input_shape) Computes expected output shape from the layer based on the specified input shape.
output(*inputs, **kwargs) Propagates input through the layer. The kwargs variable might contain additional information that propagates through the network.
options = {'name': Option(class_name='BaseLayer', value=Property(name="name")), 'proba': Option(class_name='Dropout', value=ProperFractionProperty(name="proba"))}[source]
output(input_value, training=False)[source]
proba = None[source]
class neupy.layers.stochastic.GaussianNoise[source]

Add gaussian noise to the input value. Mean and standard deviation of the noise can be controlled from the layers parameters.

It’s important to note that output from the layer is controlled by the training parameter in the output method. The layer will be applied only when training=True is propagated through the network; otherwise it acts as an identity.
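The behaviour can be sketched with NumPy. This is an illustrative stand-in, not neupy's implementation; the function name and rng handling are assumptions.

```python
import numpy as np

def gaussian_noise(values, mean=0.0, std=1.0, training=True, rng=None):
    """Conceptual sketch: add N(mean, std) noise during training only."""
    if not training:
        return values  # acts as an identity outside of training
    rng = rng or np.random.default_rng(0)
    return values + rng.normal(loc=mean, scale=std, size=values.shape)

x = np.zeros((2, 5))
noisy = gaussian_noise(x, mean=0.0, std=0.1)                  # perturbed
clean = gaussian_noise(x, mean=0.0, std=0.1, training=False)  # identical to x
```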

Parameters:
std : float

Standard deviation of the gaussian noise. Value needs to be greater than zero. Defaults to 1.

mean : float

Mean of the gaussian noise. Defaults to 0.

name : str or None

Layer’s name. Can be used as a reference to a specific layer. The name can be specified as:

  • String: The specified name will be used as a direct reference to the layer. For example, name="fc".
  • Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:<03d}" will be converted to fc-001, fc-002, fc-003, and so on.
  • None: When the value is None, the name will be generated from the class name.

Defaults to None.

Examples

>>> from neupy.layers import *
>>> network = join(
...     Input(10),
...     Relu(5) >> GaussianNoise(std=0.1),
...     Relu(5) >> GaussianNoise(std=0.1),
...     Sigmoid(1),
... )
>>> network
(?, 10) -> [... 6 layers ...] -> (?, 1)
Attributes:
variables : dict

Variable names and their values. The dictionary can be empty if the variables haven’t been created yet.

Methods

variable(value, name, shape=None, trainable=True) Initializes variable with specified values.
get_output_shape(input_shape) Computes expected output shape from the layer based on the specified input shape.
output(*inputs, **kwargs) Propagates input through the layer. The kwargs variable might contain additional information that propagates through the network.
mean = None[source]
options = {'mean': Option(class_name='GaussianNoise', value=NumberProperty(name="mean")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'std': Option(class_name='GaussianNoise', value=NumberProperty(name="std"))}[source]
output(input_value, training=False)[source]
std = None[source]
class neupy.layers.stochastic.DropBlock[source]

DropBlock, a form of structured dropout, where units in a contiguous region of a feature map are dropped together.
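Structured dropout can be sketched with NumPy as follows. This is a simplified illustration, not neupy's actual algorithm: the DropBlock paper derives a seed rate (gamma) so that the expected fraction of kept units matches keep_proba, while this sketch uses 1 - keep_proba directly as the seed rate, and the edge handling is an assumption.

```python
import numpy as np

def drop_block(feature_map, keep_proba, block_size, rng=None):
    """Conceptual sketch: sample seed positions on a 2D feature map,
    then zero a block_size x block_size region around each seed."""
    h, w = feature_map.shape
    rng = rng or np.random.default_rng(0)
    # Simplified seed rate; the paper adjusts this (gamma) for block overlap.
    seeds = rng.random((h, w)) < (1 - keep_proba)
    mask = np.ones((h, w))
    half = block_size // 2
    for i, j in zip(*np.nonzero(seeds)):
        # Zero the contiguous block centred on the seed, clipped at edges.
        mask[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1] = 0
    return feature_map * mask
```

With keep_proba=1 no seeds are sampled and the input passes through unchanged; with keep_proba=0 every position seeds a block and the whole map is zeroed.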

Parameters:
keep_proba : float

Fraction of the input units to keep. Value needs to be between 0 and 1.

block_size : int or tuple

Size of the block to be dropped. Square blocks can be specified with a single integer value. For example, block_size=5 is the same as block_size=(5, 5).

name : str or None

Layer’s name. Can be used as a reference to a specific layer. The name can be specified as:

  • String: The specified name will be used as a direct reference to the layer. For example, name="fc".
  • Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:<03d}" will be converted to fc-001, fc-002, fc-003, and so on.
  • None: When the value is None, the name will be generated from the class name.

Defaults to None.

See also

Dropout
Dropout layer.

References

[1] Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le. DropBlock: A regularization
method for convolutional networks, 2018.

Examples

>>> from neupy.layers import *
>>> network = join(
...     Input((28, 28, 1)),
...
...     Convolution((3, 3, 16)) >> Relu(),
...     DropBlock(keep_proba=0.1, block_size=5),
...
...     Convolution((3, 3, 32)) >> Relu(),
...     DropBlock(keep_proba=0.1, block_size=5),
... )
Attributes:
variables : dict

Variable names and their values. The dictionary can be empty if the variables haven’t been created yet.

Methods

variable(value, name, shape=None, trainable=True) Initializes variable with specified values.
get_output_shape(input_shape) Computes expected output shape from the layer based on the specified input shape.
output(*inputs, **kwargs) Propagates input through the layer. The kwargs variable might contain additional information that propagates through the network.
block_size = None[source]
get_output_shape(input_shape)[source]
keep_proba = None[source]
options = {'block_size': Option(class_name='DropBlock', value=TypedListProperty(name="block_size")), 'keep_proba': Option(class_name='DropBlock', value=ProperFractionProperty(name="keep_proba")), 'name': Option(class_name='BaseLayer', value=Property(name="name"))}[source]
output(input, training=False)[source]