neupy.layers.LeakyRelu

class neupy.layers.LeakyRelu

Layer with the leaky rectifier (Leaky ReLU) used as an activation function. It applies a linear transformation when the n_units parameter is specified and the leaky ReLU function after the transformation. When n_units is not specified, only the leaky ReLU function will be applied to the input.
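For example, both modes can be constructed directly (a minimal sketch):

>>> from neupy.layers import *
>>> layer = LeakyRelu(10)  # linear transformation followed by leaky ReLU
>>> activation = LeakyRelu()  # no parameters, applies leaky ReLU only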

Parameters:
n_units : int or None

Number of units in the layer. It also corresponds to the number of output features produced per sample after passing the input through this layer. The None value means that the layer will have no parameters and will only apply the activation function to its input, without a linear transformation. Defaults to None.

weight : array-like, Tensorflow variable, scalar or Initializer

Defines the layer’s weights. Default initialization methods can be found here. Defaults to HeNormal().

bias : 1D array-like, Tensorflow variable, scalar, Initializer or None

Defines the layer’s bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes the bias from the calculations and does not add it to the list of parameters.
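For instance, a custom weight initializer and a disabled bias can be specified like this (a short sketch using neupy's init module):

>>> from neupy import init
>>> from neupy.layers import *
>>> layer = LeakyRelu(20, weight=init.Normal(), bias=None)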

name : str or None

Layer’s name. Can be used as a reference to a specific layer. When the value is None, the name will be generated from the class name. Defaults to None.
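For example:

>>> from neupy.layers import *
>>> layer = LeakyRelu(20, name='hidden-layer')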

Notes

Does the same as Relu(n_units, alpha=0.01).
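Numerically, the function is f(x) = x for x >= 0 and f(x) = 0.01 * x otherwise. A plain NumPy sketch of this behavior (not the layer's actual TensorFlow implementation):

>>> import numpy as np
>>> def leaky_relu(x, alpha=0.01):
...     return np.where(x >= 0, x, alpha * x)
>>> leaky_relu(np.array([-2.0, 0.0, 3.0]))
array([-0.02,  0.  ,  3.  ])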

Examples

Feedforward Neural Networks (FNN)

>>> from neupy.layers import *
>>> network = Input(10) >> LeakyRelu(20) >> LeakyRelu(1)
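The resulting network can be passed to one of the training algorithms (a hedged sketch; x_train and y_train are placeholder data):

>>> from neupy import algorithms
>>> optimizer = algorithms.Adam(network, verbose=False)
>>> # optimizer.train(x_train, y_train, epochs=100)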
Attributes:
variables : dict

Variable names and their values. The dictionary can be empty if the variables haven’t been created yet.
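For example (illustrative; the exact contents depend on whether the layer has been connected and its variables created):

>>> from neupy.layers import *
>>> layer = LeakyRelu(10)
>>> layer.variables  # empty until variables are created
{}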

Methods

variable(value, name, shape=None, trainable=True) Initializes a variable with the specified values.
get_output_shape(input_shape) Computes the expected output shape of the layer based on the specified input shape.
output(*inputs, **kwargs) Propagates input through the layer. The kwargs variable might contain additional information that propagates through the network.
activation_function(input) Applies the activation function to the input.
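A short sketch of how these methods might be used (the exact return type of get_output_shape depends on the installed TensorFlow version):

>>> from neupy.layers import *
>>> layer = LeakyRelu(20)
>>> shape = layer.get_output_shape((None, 10))  # describes (None, 20)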
options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}