neupy.layers.activations module
- class neupy.layers.activations.Linear[source]
Layer with a linear activation function. It applies a linear transformation when the n_units parameter is specified and acts as an identity when it is not.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
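The format-string naming described above can be sketched with plain Python. The zero-padded pattern below is illustrative; NeuPy substitutes an increasing layer index into the field.

```python
# Sketch of how a format-string name pattern expands when an increasing
# layer index is substituted into the field (pattern is illustrative).
pattern = "fc-{:03d}"
names = [pattern.format(i) for i in (1, 2, 3)]
print(names)            # ['fc-001', 'fc-002', 'fc-003']
print("fc{}".format(1))  # fc1
```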
Examples
Linear Regression
>>> from neupy.layers import *
>>> network = Input(10) >> Linear(5)
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- bias = None[source]
- create_variables(input_shape)[source]
- get_output_shape(input_shape)[source]
- n_units = None[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- output(input, **kwargs)[source]
- weight = None[source]
- class neupy.layers.activations.Sigmoid[source]
Layer with the sigmoid activation function. It applies a linear transformation when the n_units parameter is specified, followed by the sigmoid function. When n_units is not specified, only the sigmoid function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Logistic Regression (LR)
>>> from neupy.layers import *
>>> network = Input(10) >> Sigmoid(1)
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Sigmoid(5) >> Sigmoid(1)
Convolutional Neural Networks (CNN) for Semantic Segmentation
The Sigmoid layer can be used to normalize per-pixel probabilities in a semantic segmentation task with two classes. In the example below, the input is a 32x32 image for which the network predicts one of two classes per pixel. Sigmoid converts the raw per-pixel predictions into valid probabilities.
>>> from neupy.layers import *
>>> network = Input((32, 32, 1)) >> Sigmoid()
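A NumPy sketch of what this layer computes, assuming the standard definition sigmoid(x) = 1 / (1 + exp(-x)). The weight W and bias b below stand in for the layer's learned parameters; they are placeholders, not NeuPy internals.

```python
import numpy as np

def sigmoid_layer(x, W, b):
    # linear transformation (skipped when n_units is None), then sigmoid
    z = x @ W + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.zeros((1, 10))   # one sample with 10 features
W = np.zeros((10, 1))   # placeholder weights for n_units=1
b = np.zeros(1)         # placeholder bias
print(sigmoid_layer(x, W, b))  # [[0.5]]
```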
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.Tanh[source]
Layer with the hyperbolic tangent activation function. It applies a linear transformation when the n_units parameter is specified, followed by the tanh function. When n_units is not specified, only the tanh function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Tanh(5)
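A NumPy sketch of the nonlinearity this layer applies, assuming the standard hyperbolic tangent: outputs are squashed into the open interval (-1, 1), which is why tanh layers are often used when zero-centered activations are wanted.

```python
import numpy as np

# tanh squashes any real input into (-1, 1) and maps 0 to 0
x = np.array([-2.0, 0.0, 2.0])
y = np.tanh(x)
print(y.round(4))  # roughly [-0.964, 0.0, 0.964]
```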
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.Softmax[source]
Layer with the softmax activation function. It applies a linear transformation when the n_units parameter is specified, followed by the softmax function. When n_units is not specified, only the softmax function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Relu(20) >> Softmax(10)
Convolutional Neural Networks (CNN) for Semantic Segmentation
The Softmax layer can be used to normalize per-pixel probabilities. In the example below, the input is a 32x32 image with a raw prediction per pixel for each of 10 classes. Softmax converts the raw per-pixel predictions into a probability distribution.
>>> from neupy.layers import *
>>> network = Input((32, 32, 10)) >> Softmax()
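The per-pixel normalization above can be sketched in NumPy. Normalizing along the last axis is an assumption consistent with the channels-last shape (32, 32, 10) in the example; after softmax every pixel's 10 class scores sum to 1.

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the per-slice max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

raw = np.random.randn(32, 32, 10)  # raw per-pixel scores for 10 classes
probs = softmax(raw)
# every pixel's class probabilities are non-negative and sum to 1
print(probs.shape)  # (32, 32, 10)
```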
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.Relu[source]
Layer with the rectifier (ReLU) activation function. It applies a linear transformation when the n_units parameter is specified, followed by the ReLU function. When n_units is not specified, only the ReLU function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- alpha : float
The alpha parameter defines the slope for negative values. If alpha is non-zero, the layer behaves like a leaky ReLU. Defaults to 0.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal(gain=2).
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Relu(20) >> Relu(1)
Convolutional Neural Networks (CNN)
>>> from neupy.layers import *
>>> network = join(
...     Input((32, 32, 3)),
...     Convolution((3, 3, 16)) >> Relu(),
...     Convolution((3, 3, 32)) >> Relu(),
...     Reshape(),
...     Softmax(10),
... )
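A NumPy sketch of the activation itself, assuming the usual generalization where alpha scales the negative part: alpha=0 gives a plain ReLU and alpha > 0 gives a leaky ReLU, matching the alpha parameter described above.

```python
import numpy as np

def relu(x, alpha=0.0):
    # keep the positive part; scale the negative part by alpha
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

x = np.array([-4.0, -1.0, 0.0, 2.0])
print(relu(x))             # [0. 0. 0. 2.]
print(relu(x, alpha=0.1))  # [-0.4 -0.1  0.   2. ]
```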
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- alpha = None[source]
- options = {'alpha': Option(class_name='Relu', value=NumberProperty(name="alpha")), 'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.LeakyRelu[source]
Layer with the leaky rectifier (leaky ReLU) activation function. It applies a linear transformation when the n_units parameter is specified, followed by the leaky ReLU function. When n_units is not specified, only the leaky ReLU function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Notes
Does the same as Relu(n_units, alpha=0.01).
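The equivalence in the note above can be checked numerically, assuming the standard leaky ReLU definition (identity for non-negative inputs, slope alpha for negative inputs):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # identity on the positive part, slope alpha on the negative part
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

x = np.linspace(-3, 3, 13)
# identical to a generalized ReLU evaluated with alpha=0.01
same = np.allclose(leaky_relu(x), np.where(x >= 0, x, 0.01 * x))
print(same)  # True
```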
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> LeakyRelu(20) >> LeakyRelu(1)
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.Elu[source]
Layer with the exponential linear unit (ELU) activation function. It applies a linear transformation when the n_units parameter is specified, followed by the ELU function. When n_units is not specified, only the ELU function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
References
[1] http://arxiv.org/pdf/1511.07289v3.pdf
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Elu(5) >> Elu(1)
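A NumPy sketch of the ELU nonlinearity, assuming the definition from the referenced paper with its scale parameter fixed to 1: elu(x) = x for x > 0 and exp(x) - 1 otherwise, which keeps the function smooth at zero and lets negative inputs saturate at -1.

```python
import numpy as np

def elu(x):
    # identity for positive inputs; exp(x) - 1 for the rest
    # (np.minimum guards the exp against overflow on large positives)
    return np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1)

x = np.array([-2.0, 0.0, 3.0])
print(elu(x).round(4))  # roughly [-0.8647, 0.0, 3.0]
```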
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.PRelu[source]
Layer with the parametrized ReLU activation function. The layer learns the additional parameter alpha during training.
It applies a linear transformation when the n_units parameter is specified, followed by the parametrized ReLU function. When n_units is not specified, only the parametrized ReLU function is applied to the input.
Parameters: - alpha_axes : int or tuple
Axes that will not include a unique alpha parameter. A single integer value means the same as a tuple with one value. Defaults to -1.
- alpha : array-like, Tensorflow variable, scalar or Initializer
Separate alpha parameter for each non-shared axis of the ReLU. A scalar value means that every element in the tensor will be equal to the specified value. Default initialization methods can be found here. Defaults to Constant(value=0.25).
- n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
References
[1] Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. https://arxiv.org/pdf/1502.01852v1.pdf
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> PRelu(20) >> PRelu(1)
Convolutional Neural Networks (CNN)
>>> from neupy.layers import *
>>> network = join(
...     Input((32, 32, 3)),
...     Convolution((3, 3, 16)) >> PRelu(),
...     Convolution((3, 3, 32)) >> PRelu(),
...     Reshape(),
...     Softmax(10),
... )
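A NumPy sketch of the PRelu forward pass with the default alpha_axes=-1: one alpha per last-axis channel, broadcast over the other axes. The alpha values here are fixed for illustration (NeuPy initializes them to 0.25 and then learns them during training).

```python
import numpy as np

def prelu(x, alpha):
    # positive part unchanged; each channel's negative part scaled by its alpha
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

x = np.random.randn(4, 4, 3)   # small feature map with 3 channels
alpha = np.full(3, 0.25)       # one alpha per last-axis channel (alpha_axes=-1)
out = prelu(x, alpha)
print(out.shape)  # (4, 4, 3)
```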
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input)[source]
- alpha = None[source]
- alpha_axes = None[source]
- create_variables(input_shape)[source]
- get_output_shape(input_shape)[source]
- options = {'alpha': Option(class_name='PRelu', value=ParameterProperty(name="alpha")), 'alpha_axes': Option(class_name='PRelu', value=TypedListProperty(name="alpha_axes")), 'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.Softplus[source]
Layer with the softplus activation function. It applies a linear transformation when the n_units parameter is specified, followed by the softplus function. When n_units is not specified, only the softplus function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> Softplus(4)
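A NumPy sketch of the softplus activation, assuming the standard definition softplus(x) = log(1 + exp(x)), a smooth approximation of ReLU that is strictly positive everywhere.

```python
import numpy as np

def softplus(x):
    # numerically stable rewrite of log(1 + exp(x)):
    # log1p(exp(-|x|)) + max(x, 0) avoids overflow for large x
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

x = np.array([-10.0, 0.0, 10.0])
print(softplus(x).round(4))  # roughly [0.0, 0.6931, 10.0]
```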
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]
- class neupy.layers.activations.HardSigmoid[source]
Layer with the hard sigmoid activation function. It applies a linear transformation when the n_units parameter is specified, followed by the hard sigmoid function. When n_units is not specified, only the hard sigmoid function is applied to the input.
Parameters: - n_units : int or None
Number of units in the layer. It also corresponds to the number of output features produced per sample after passing it through this layer. When set to None, the layer has no parameters and only applies the activation function to the input, without a linear transformation. Defaults to None.
- weight : array-like, Tensorflow variable, scalar or Initializer
Defines the layer's weights. Default initialization methods can be found here. Defaults to HeNormal().
- bias : 1D array-like, Tensorflow variable, scalar, Initializer or None
Defines the layer's bias. Default initialization methods can be found here. Defaults to Constant(0). The None value excludes bias from the calculations and does not add it to the parameter list.
- name : str or None
Layer's name. Can be used as a reference to a specific layer. The name can be specified as:
- String: The specified name will be used as a direct reference to the layer. For example, name="fc".
- Format string: The name pattern can be defined as a format string whose field will be replaced with an index. For example, name="fc{}" will produce fc1, fc2, and so on. More complex formatting is also acceptable; for example, name="fc-{:03d}" will produce fc-001, fc-002, fc-003, and so on.
- None: When the value is None, the name will be generated from the class name.
Defaults to None.
Examples
Feedforward Neural Networks (FNN)
>>> from neupy.layers import *
>>> network = Input(10) >> HardSigmoid(5)
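A NumPy sketch of a hard sigmoid, assuming the common piecewise-linear approximation clip(0.2 * x + 0.5, 0, 1); the slope and offset are the widely used constants and may differ from NeuPy's exact implementation, so treat them as illustrative.

```python
import numpy as np

def hard_sigmoid(x):
    # piecewise-linear approximation of sigmoid: cheap to compute,
    # exactly 0 below -2.5 and exactly 1 above 2.5 (with these constants)
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

x = np.array([-5.0, 0.0, 5.0])
print(hard_sigmoid(x))  # [0.  0.5 1. ]
```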
Attributes: - variables : dict
Variable names and their values. The dictionary can be empty if the variables haven't been created yet.
Methods
variable(value, name, shape=None, trainable=True): Initializes a variable with the specified values.
get_output_shape(input_shape): Computes the layer's expected output shape based on the specified input shape.
output(*inputs, **kwargs): Propagates input through the layer. The kwargs may contain additional information that propagates through the network.
activation_function(input): Applies the activation function to the input.
- activation_function(input_value)[source]
- options = {'bias': Option(class_name='Linear', value=ParameterProperty(name="bias")), 'n_units': Option(class_name='Linear', value=IntProperty(name="n_units")), 'name': Option(class_name='BaseLayer', value=Property(name="name")), 'weight': Option(class_name='Linear', value=ParameterProperty(name="weight"))}[source]