neupy.algorithms.Adagrad
- class neupy.algorithms.Adagrad[source]
- Adagrad algorithm.
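- In the standard formulation from [1], Adagrad accumulates the squared gradients observed so far and scales each parameter update by their root. A sketch of the update rule (the exact placement of the stabilizing constant \epsilon in this implementation is an assumption):

    a_t = a_{t-1} + g_t^2
    \theta_t = \theta_{t-1} - \frac{\text{step}}{\sqrt{a_t} + \epsilon} \, g_t

  where g_t is the gradient of the loss with respect to the parameters at update t.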
- Parameters:
- batch_size : int or None
- Set up mini-batch size. With None, all data samples are propagated through the network at once. Defaults to 128.
- network : list, tuple or LayerConnection instance
- Network’s architecture. There are a few ways to define it, as sketched below.
- List of layers. For instance, [Input(2), Tanh(4), Relu(1)].
- Constructed layers. For instance, Input(2) >> Tanh(4) >> Relu(1).
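- Both forms describe the same architecture; a minimal sketch of passing either one to the optimizer (layer sizes here are arbitrary):
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> network = [Input(2), Tanh(4), Relu(1)]       # list of layers
>>> network = Input(2) >> Tanh(4) >> Relu(1)     # equivalent inline composition
>>> optimizer = algorithms.Adagrad(network)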
 
- regularizer : function or None
- Network’s regularizer. 
- loss : str or function
- Error/loss function. Defaults to mse.
- mae - Mean Absolute Error.
- mse - Mean Squared Error.
- rmse - Root Mean Squared Error.
- msle - Mean Squared Logarithmic Error.
- rmsle - Root Mean Squared Logarithmic Error.
- categorical_crossentropy - Categorical cross entropy.
- binary_crossentropy - Binary cross entropy.
- binary_hinge - Binary hinge loss.
- categorical_hinge - Categorical hinge loss.
- Custom function which accepts two mandatory arguments. The first one is the expected value and the second one is the predicted value. Example:
    def custom_func(expected, predicted):
        return expected - predicted
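- A custom loss is passed the same way as the built-in names; a minimal sketch, assuming a TensorFlow backend (the mean-squared body below is an illustration, not part of neupy):
>>> import tensorflow as tf
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> def custom_mse(expected, predicted):
...     # both arguments are tensors; reduce to a scalar loss
...     return tf.reduce_mean(tf.square(expected - predicted))
>>>
>>> network = Input(2) >> Sigmoid(3) >> Sigmoid(1)
>>> optimizer = algorithms.Adagrad(network, loss=custom_mse)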
- step : float, Variable
- Learning rate, defaults to 0.1. 
- show_epoch : int
- This property controls how often the network displays information about training. It has to be a positive integer. For instance, the number 100 means that the network shows a summary at the 1st, 100th, 200th, 300th, … and last epochs.
- Defaults to 1.
- shuffle_data : bool
- If it’s True, then the training data will be shuffled before training. Defaults to True.
- signals : dict, list or function
- Function that will be triggered after certain events during training.
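- As a hypothetical sketch (the callback signature below is an assumption; this page does not document it), a function passed as signals could log progress after training events:
>>> def logging_signal(optimizer):
...     # hypothetical: assumes the callback receives the optimizer instance
...     print("finished epoch", optimizer.last_epoch)
>>>
>>> optimizer = algorithms.Adagrad(network, signals=logging_signal)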
- verbose : bool
- This property controls verbose output in the terminal. The True value enables informative output and False disables it. Defaults to False.
- References
- [1] John Duchi, Elad Hazan, Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 2011. http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
- Examples
>>> import numpy as np
>>> from neupy import algorithms
>>> from neupy.layers import *
>>>
>>> x_train = np.array([[1, 2], [3, 4]])
>>> y_train = np.array([[1], [0]])
>>>
>>> network = Input(2) >> Sigmoid(3) >> Sigmoid(1)
>>> optimizer = algorithms.Adagrad(network)
>>> optimizer.train(x_train, y_train)
- Attributes:
- errors : list
- Information about errors. It has two main attributes, namely train and valid. These attributes provide access to the training and validation errors respectively (see the sketch after this attributes list).
- last_epoch : int
- Value equal to the last trained epoch. After initialization it is equal to 0.
- n_updates_made : int
- Number of training updates applied to the network. 
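- These attributes can be inspected after training; a minimal sketch, continuing the example above:
>>> optimizer.train(x_train, y_train, epochs=100)
>>> training_errors = optimizer.errors.train    # list of training errors
>>> optimizer.last_epoch                        # number of the last trained epoch
>>> optimizer.n_updates_made                    # total parameter updates applied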
- Methods
- predict(X)
- Predicts output for the specified input.
- train(X_train, y_train, X_test=None, y_test=None, epochs=100)
- Train network. You can control the network’s training procedure with the epochs parameter. Both X_test and y_test should be provided when the network’s validation is required after each training epoch.
- fit(*args, **kwargs)
- Alias to the train method.
- init_train_updates()[source]
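- A training call with validation, based on the signature above; a minimal sketch (the test arrays are arbitrary illustrations):
>>> x_test = np.array([[5, 6]])
>>> y_test = np.array([[0]])
>>>
>>> # validation error is computed after each epoch when both test arrays are given
>>> optimizer.train(x_train, y_train, x_test, y_test, epochs=50)
>>> y_predicted = optimizer.predict(x_test)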
- options = {
      'batch_size': Option(class_name='GradientDescent', value=IntProperty(name="batch_size")),
      'loss': Option(class_name='BaseOptimizer', value=FunctionWithOptionsProperty(name="loss")),
      'regularizer': Option(class_name='BaseOptimizer', value=Property(name="regularizer")),
      'show_epoch': Option(class_name='BaseNetwork', value=IntProperty(name="show_epoch")),
      'shuffle_data': Option(class_name='BaseNetwork', value=Property(name="shuffle_data")),
      'signals': Option(class_name='BaseNetwork', value=Property(name="signals")),
      'step': Option(class_name='BaseOptimizer', value=ScalarVariableProperty(name="step")),
      'target': Option(class_name='BaseOptimizer', value=Property(name="target")),
      'verbose': Option(class_name='Verbose', value=VerboseProperty(name="verbose")),
  }[source]