class neupy.algorithms.Oja[source]

Oja is an unsupervised technique used for dimensionality reduction tasks.
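The core idea can be sketched in plain NumPy for a single output component: a Hebbian update with a decay term that keeps the weights bounded. This is an illustration under the assumption of zero-mean input data, not neupy's implementation; the function name `oja_component` is made up for this sketch.

```python
import numpy as np

# Minimal sketch of Oja's learning rule for one output component,
# assuming zero-mean input data; illustration only, not neupy's code.
def oja_component(data, step=0.01, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=data.shape[1]) * 0.1
    for _ in range(epochs):
        for x in data:
            y = w @ x                    # output: projection onto the weights
            w += step * y * (x - y * w)  # Hebbian term plus Oja's decay term
    return w

# Correlated 2-D data whose dominant direction is roughly (1, 1) / sqrt(2)
rng = np.random.default_rng(42)
t = rng.normal(size=(500, 1))
data = np.hstack([t, t + 0.1 * rng.normal(size=(500, 1))])
w = oja_component(data)
print(np.abs(w / np.linalg.norm(w)))  # approximately [0.707, 0.707]
```

The weights converge to the unit-norm principal eigenvector of the data covariance, which is what makes the rule useful for dimensionality reduction.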

minimized_data_size : int

Expected number of features after minimization, defaults to 1.

weight : array-like or None

Defines networks weights. Defaults to XavierNormal().

step : float

Learning rate, defaults to 0.1.

show_epoch : int or str

This property controls how often the network will display information about training. There are two main syntaxes for this property.

  • You can define it as a positive integer. It defines how often you would like to see summary output in the terminal. For instance, the number 100 means that the network shows a summary at the 100th, 200th, 300th … epochs.
  • A string defines the number of times you want to see output in the terminal. For instance, the value '2 times' means that the network will show output twice with approximately equal periods of epochs, plus one additional output after the final epoch.

Defaults to 1.
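The two forms can be illustrated with a small helper that lists the epochs at which a summary line would appear. This is a sketch of the described behavior, not neupy's internal logic, and `summary_epochs` is a hypothetical name:

```python
def summary_epochs(show_epoch, total_epochs):
    """Epochs at which a summary would be shown, per the rules above (sketch)."""
    if isinstance(show_epoch, int):
        # integer form: every `show_epoch`-th epoch
        return [e for e in range(1, total_epochs + 1) if e % show_epoch == 0]
    # string form like '2 times': evenly spaced outputs, plus the final epoch
    times = int(show_epoch.split()[0])
    period = max(total_epochs // times, 1)
    epochs = [e for e in range(1, total_epochs + 1) if e % period == 0]
    if total_epochs not in epochs:
        epochs.append(total_epochs)
    return epochs

print(summary_epochs(100, 300))       # [100, 200, 300]
print(summary_epochs('2 times', 10))  # [5, 10]
```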

epoch_end_signal : function

Calls this function when a training epoch finishes.

train_end_signal : function

Calls this function when the training process finishes.

verbose : bool

Controls verbose output in the terminal: True enables informative output and False disables it. Defaults to False.

  • Triggers when you try to reconstruct output without training.
  • Triggers when the number of input features is invalid for the train and reconstruct methods.


  • In practice, use a very small value for step. For instance, 1e-7 can be a good choice.
  • Normalize the input data before using the Oja algorithm. The input data shouldn't contain large values.
  • Set smaller values for weight if the error for the first few iterations is big compared to the scale of the input values. For instance, if your input data has values between 0 and 1, an error value equal to 100 is big.
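The normalization tip above can be sketched with plain NumPy; the variable names are illustrative and not part of the neupy API:

```python
import numpy as np

# Center each feature and scale it to unit variance before training,
# so the input no longer contains large values.
raw = np.array([[200.0, 210.0], [100.0, 95.0], [400.0, 405.0], [500.0, 510.0]])
mean, std = raw.mean(axis=0), raw.std(axis=0)
normalized = (raw - mean) / std

print(normalized.mean(axis=0))  # approximately [0, 0]
print(normalized.std(axis=0))   # [1, 1]
```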


>>> import numpy as np
>>> from neupy import algorithms
>>> data = np.array([[2, 2], [1, 1], [4, 4], [5, 5]])
>>> ojanet = algorithms.Oja(
...     minimized_data_size=1,
...     step=0.01,
...     verbose=False
... )
>>> ojanet.train(data, epsilon=1e-5)
>>> minimized = ojanet.predict(data)
>>> ojanet.reconstruct(minimized)
array([[ 2.00000046,  2.00000046],
       [ 1.00000023,  1.00000023],
       [ 4.00000093,  4.00000093],
       [ 5.00000116,  5.00000116]])


reconstruct(input_data) Reconstruct original dataset from the minimized input.
train(input_data, epsilon=1e-2, epochs=100) Trains the algorithm on the input dataset. For dimensionality reduction, the input dataset is assumed to also be the target.
predict(input_data) Predicts output for the specified input.
fit(*args, **kwargs) Alias to the train method.
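With a learned weight matrix of shape (n_features, minimized_data_size), predict and reconstruct reduce to two matrix products. The sketch below assumes the ideal unit weights for the perfectly correlated example data above; it illustrates the idea rather than neupy's code:

```python
import numpy as np

# Assumed ideal weights: the unit principal direction (1, 1) / sqrt(2)
w = np.array([[2 ** -0.5], [2 ** -0.5]])
data = np.array([[2.0, 2.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])

minimized = data @ w             # predict: project onto the learned subspace
reconstructed = minimized @ w.T  # reconstruct: map back to the feature space
print(np.round(reconstructed, 6))  # recovers the original data
```

Because this data lies exactly on the principal direction, the round trip is lossless, matching the near-exact reconstruction in the example above.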
minimized_data_size = None[source]
options = {'epoch_end_signal': Option(class_name='BaseNetwork', value=Property(name="epoch_end_signal")), 'minimized_data_size': Option(class_name='Oja', value=IntProperty(name="minimized_data_size")), 'show_epoch': Option(class_name='BaseNetwork', value=ShowEpochProperty(name="show_epoch")), 'shuffle_data': Option(class_name='BaseNetwork', value=Property(name="shuffle_data")), 'step': Option(class_name='BaseNetwork', value=NumberProperty(name="step")), 'train_end_signal': Option(class_name='BaseNetwork', value=Property(name="train_end_signal")), 'verbose': Option(class_name='Verbose', value=VerboseProperty(name="verbose")), 'weight': Option(class_name='Oja', value=ParameterProperty(name="weight"))}[source]

predict(input_data)[source]

Return prediction results for the input data.

input_data : array-like
train(input_data, epsilon=0.01, epochs=100)[source]

Train the neural network.

input_train : array-like
target_train : array-like or None
input_test : array-like or None
target_test : array-like or None
epochs : int

Defaults to 100.

epsilon : float or None

Defaults to 0.01.

train_epoch(input_data, target_train)[source]
weight = None[source]