# neupy.layers.LocalResponseNorm

class neupy.layers.LocalResponseNorm

Local Response Normalization Layer.

Aggregation is purely across channels, not within channels, and performed “pixelwise”.

If the value of the $i$-th channel is $x_i$, the output is

$$x_i \leftarrow \frac{x_i}{\left(k + \alpha \sum_j x_j^2\right)^{\beta}}$$

where the summation runs over the same spatial position in the $n$ neighboring channels.
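The normalization above can be sketched in plain NumPy. This is a minimal illustration of the formula, not NeuPy's implementation: it assumes a channel-last (NHWC) input and simply truncates the window at the channel boundaries.

```python
import numpy as np

def local_response_norm(x, depth_radius=5, alpha=1e-4, beta=0.75, k=2.0):
    """Apply local response normalization over the last (channel) axis.

    For each channel i, the squared activations of the ``depth_radius``
    channels centered on i are summed, and the input is divided by
    ``(k + alpha * sum) ** beta``.
    """
    half = depth_radius // 2
    squared = x ** 2
    out = np.empty_like(x, dtype=float)
    n_channels = x.shape[-1]
    for i in range(n_channels):
        # Window of neighboring channels, truncated at the edges.
        lo, hi = max(0, i - half), min(n_channels, i + half + 1)
        scale = (k + alpha * squared[..., lo:hi].sum(axis=-1)) ** beta
        out[..., i] = x[..., i] / scale
    return out

x = np.ones((1, 10, 10, 12))
y = local_response_norm(x)
print(y.shape)  # (1, 10, 10, 12)
```

Note that the output shape matches the input shape: LRN only rescales activations, it does not change dimensionality.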

Parameters:

- **alpha** : float
  Scaling coefficient; see equation above. Defaults to `1e-4`.
- **beta** : float
  Exponent; see equation above. Defaults to `0.75`.
- **k** : float
  Additive offset; see equation above. Defaults to `2`.
- **depth_radius** : int
  Number of adjacent channels to normalize over; must be odd. Defaults to `5`.
- **name** : str or None
  Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
  - String: the specified name is used as a direct reference to the layer. For example, `name="fc"`.
  - Format string: the name pattern is defined as a format string whose field is replaced with an index. For example, `name="fc{}"` is expanded to `fc1`, `fc2`, and so on. Slightly more complex formatting is also accepted; for example, `name="fc-{:03d}"` is expanded to `fc-001`, `fc-002`, `fc-003`, and so on.
  - None: the name is generated from the class name. Defaults to `None`.
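The format-string naming described above can be illustrated with Python's own `str.format`. This is a sketch of how such a pattern expands with an index, not NeuPy's internal naming code:

```python
# Hypothetical expansion of an indexed layer-name pattern.
pattern = "fc-{:03d}"
names = [pattern.format(i) for i in (1, 2, 3)]
print(names)  # ['fc-001', 'fc-002', 'fc-003']
```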

Examples

>>> from neupy.layers import *
>>> network = Input((10, 10, 12)) >> LocalResponseNorm()

Attributes:

- **variables** : dict
  Variable names and their values. The dictionary can be empty if the variables haven't been created yet.

Methods

- `variable(value, name, shape=None, trainable=True)` — Initializes a variable with the specified values.
- `get_output_shape(input_shape)` — Computes the layer's expected output shape from the specified input shape.
- `output(*inputs, **kwargs)` — Propagates input through the layer. The `kwargs` variable may contain additional information that propagates through the network.
alpha = None
beta = None