# neupy.layers.Dropout

class neupy.layers.Dropout

Dropout layer. It randomly switches off (multiplies by zero) input values; the probability that each value is switched off is controlled by the proba parameter. For example, proba=0.2 means that roughly 20% of the input values will be multiplied by 0 and the remaining 80% will pass through unchanged.

It’s important to note that the dropout output is controlled by the training parameter of the output method. Dropout is applied only when training=True is propagated through the network; otherwise the layer acts as an identity.
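As a minimal illustration of that behavior, here is a NumPy sketch (not neupy's implementation; library dropout layers typically also rescale surviving values by 1/(1 - proba), which is omitted here to mirror the description above):

```python
import numpy as np

def dropout(values, proba, training=False, rng=None):
    """Zero each value with probability `proba`, but only while training."""
    if not training:
        # Outside of training the layer acts as an identity.
        return values
    rng = rng if rng is not None else np.random.default_rng()
    # Each entry survives with probability 1 - proba.
    mask = rng.random(values.shape) >= proba
    return values * mask

x = np.ones((4, 5))
print(dropout(x, proba=0.5, training=True))   # roughly half the entries zeroed
print(dropout(x, proba=0.5, training=False))  # identical to x
```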

Parameters

proba : float
    Fraction of the input units to drop. The value needs to be between 0 and 1.

name : str or None
    Layer’s name, which can be used as a reference to a specific layer. The name can be specified as:

    - String: the given name is used as a direct reference to the layer. For example, name="fc".
    - Format string: the name pattern is defined as a format string whose field is replaced with an index. For example, name="fc{}" produces fc1, fc2 and so on. More complex format specs are also acceptable; for example, name="fc-{:03d}" produces fc-001, fc-002, fc-003 and so on.
    - None: the name is generated from the class name.

    Defaults to None.
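The index substitution in format-string names follows Python's own str.format rules (zero padding, for instance, uses the {:03d} spec); a small standalone illustration, independent of neupy:

```python
# Plain str.format illustration of the name patterns described above.
template = "fc{}"
print([template.format(i) for i in (1, 2, 3)])   # ['fc1', 'fc2', 'fc3']

padded = "fc-{:03d}"
print([padded.format(i) for i in (1, 2, 3)])     # ['fc-001', 'fc-002', 'fc-003']
```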

See also

DropBlock
    DropBlock layer.

References

[1] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 2014.

Examples

```python
>>> from neupy.layers import *
>>> network = join(
...     Input(10),
...     Relu(5) >> Dropout(0.5),
...     Relu(5) >> Dropout(0.5),
...     Sigmoid(1),
... )
>>> network
(?, 10) -> [... 6 layers ...] -> (?, 1)
```

Attributes

variables : dict
    Variable names and their values. The dictionary can be empty if the variables haven’t been created yet.

Methods

variable(value, name, shape=None, trainable=True)
    Initializes a variable with the specified values.

get_output_shape(input_shape)
    Computes the layer’s expected output shape from the specified input shape.

output(input_value, training=False)
    Propagates the input through the layer. Additional keyword arguments may carry information, such as the training flag, that propagates through the network.
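As a hedged sketch (not neupy's source) of how a keyword such as training=True can propagate through output(*inputs, **kwargs) in a chain of layers:

```python
import numpy as np

class DropoutSketch:
    """Toy stand-in for a dropout layer; zeroes values only while training."""
    def __init__(self, proba):
        self.proba = proba

    def output(self, value, training=False, **kwargs):
        if not training:
            # Identity outside of training, as described above.
            return value
        mask = np.random.default_rng().random(value.shape) >= self.proba
        return value * mask

class ScaleSketch:
    """Toy deterministic layer; accepts and ignores extra keywords."""
    def __init__(self, factor):
        self.factor = factor

    def output(self, value, **kwargs):
        return value * self.factor

def forward(layers, value, **kwargs):
    # The same keyword arguments are handed to every layer, so passing
    # training=True once switches on every dropout layer in the chain.
    for layer in layers:
        value = layer.output(value, **kwargs)
    return value

net = [ScaleSketch(2.0), DropoutSketch(0.5)]
x = np.ones((2, 3))
print(forward(net, x))                 # training defaults to False: all 2.0
print(forward(net, x, training=True))  # roughly half the entries zeroed
```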