Standard Runner

class deepobs.tensorflow.runners.standard_runner.StandardRunner(optimizer_class, hyperparams)[source]

Provides functionality to run optimizers on DeepOBS test problems, including the logging of important performance metrics.

Parameters:
  • optimizer_class -- Optimizer class, which should inherit from tf.train.Optimizer and/or obey the same interface for .minimize().
  • hyperparams --

    A list describing the optimizer's hyperparameters other than learning rate. Each entry of the list is a dictionary describing one of the hyperparameters. This dictionary is expected to have the following two fields:

    • hyperparams["name"] must contain the name of the parameter (i.e., the exact name of the corresponding keyword argument to the optimizer class's init function).
    • hyperparams["type"] specifies the type of the parameter (e.g., int, float, bool).

    Optionally, the dictionary can have a third field indexed by the key "default", which specifies a default value for the hyperparameter.

Example

>>> optimizer_class = tf.train.MomentumOptimizer
>>> hyperparams = [
...     {"name": "momentum", "type": float},
...     {"name": "use_nesterov", "type": bool, "default": False}]
>>> runner = StandardRunner(optimizer_class, hyperparams)
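The role of the "name", "type", and "default" fields can be sketched in plain Python. The helper below, build_hyperparams, is hypothetical (not part of DeepOBS); it only illustrates how such a specification list resolves user-supplied values into the keyword arguments handed to the optimizer's init function:

```python
def build_hyperparams(spec, **given):
    """Resolve supplied values against a hyperparameter spec list.

    Each entry in `spec` is a dict with "name", "type", and optionally
    "default", mirroring the list passed to StandardRunner's __init__.
    """
    kwargs = {}
    for entry in spec:
        name = entry["name"]
        if name in given:
            # Cast the supplied value to the declared type.
            kwargs[name] = entry["type"](given[name])
        elif "default" in entry:
            # Fall back to the declared default value.
            kwargs[name] = entry["default"]
        else:
            raise ValueError("missing required hyperparameter: " + name)
    return kwargs

spec = [
    {"name": "momentum", "type": float},
    {"name": "use_nesterov", "type": bool, "default": False},
]
print(build_hyperparams(spec, momentum=0.9))
# {'momentum': 0.9, 'use_nesterov': False}
```

A hyperparameter without a "default" field must always be supplied, while one with a "default" may be omitted.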
run(testproblem=None, weight_decay=None, batch_size=None, num_epochs=None, learning_rate=None, lr_sched_epochs=None, lr_sched_factors=None, random_seed=None, data_dir=None, output_dir=None, train_log_interval=None, print_train_iter=None, tf_logging=None, no_logs=None, **optimizer_hyperparams)[source]

Runs a given optimizer on a DeepOBS testproblem.

This method receives all relevant options to run the optimizer on a DeepOBS testproblem, including the optimizer's hyperparameters, which can be passed as keyword arguments (based on the names provided via hyperparams in the init function).

Options which are not passed here will automatically be added as command line arguments. (Some of those will be required, others will have defaults; run the script with the --help flag to see a description of the command line interface.)

Training statistics (train/test loss/accuracy) are collected and will be saved to a JSON output file, together with metadata. The training statistics can optionally also be saved in TensorFlow output files and read during training using Tensorboard.

Parameters:
  • testproblem (str) -- Name of a DeepOBS test problem.
  • weight_decay (float) -- The weight decay factor to use.
  • batch_size (int) -- The mini-batch size to use.
  • num_epochs (int) -- The number of epochs to train.
  • learning_rate (float) -- The learning rate to use. This will function as the base learning rate when implementing a schedule using lr_sched_epochs and lr_sched_factors (see below).
  • lr_sched_epochs (list) -- A list of epoch numbers (positive integers) that mark learning rate changes. The base learning rate is passed via learning_rate and the factors by which to change are passed via lr_sched_factors. Example: learning_rate=0.3, lr_sched_epochs=[50, 100], lr_sched_factors=[0.1, 0.01] will start with a learning rate of 0.3, then decrease to 0.1*0.3=0.03 after training for 50 epochs, and decrease to 0.01*0.3=0.003 after training for 100 epochs.
  • lr_sched_factors (list) -- A list of factors (floats) by which to change the learning rate. The base learning rate has to be passed via learning_rate and the epochs at which to change the learning rate have to be passed via lr_sched_epochs. Example: learning_rate=0.3, lr_sched_epochs=[50, 100], lr_sched_factors=[0.1, 0.01] will start with a learning rate of 0.3, then decrease to 0.1*0.3=0.03 after training for 50 epochs, and decrease to 0.01*0.3=0.003 after training for 100 epochs.
  • random_seed (int) -- Random seed to use. If unspecified, it defaults to 42.
  • data_dir (str) -- Path to the DeepOBS data directory. If unspecified, DeepOBS uses its default /data_deepobs.
  • output_dir (str) -- Path to the output directory. Within this directory, subfolders for the testproblem and the optimizer are automatically created. If unspecified, defaults to '/results'.
  • train_log_interval (int) -- Interval of steps at which to log the training loss. If unspecified, it defaults to 10.
  • print_train_iter (bool) -- If True, the training loss is printed to screen. If unspecified, it defaults to False.
  • tf_logging (bool) -- If True, log all statistics with TensorFlow summaries, which can be viewed in real time with TensorBoard. If unspecified, it defaults to False.
  • no_logs (bool) -- If True, no JSON files are created. If unspecified, it defaults to False.
  • optimizer_hyperparams (dict) -- Keyword arguments for the hyperparameters of the optimizer. These are the ones specified in the hyperparams dictionary passed to the __init__.
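The interaction of learning_rate, lr_sched_epochs, and lr_sched_factors described above can be reproduced with a small standalone sketch. The helper lr_schedule is hypothetical and requires neither DeepOBS nor TensorFlow; it only mirrors the documented schedule semantics (each factor multiplies the base learning rate, not the current one):

```python
def lr_schedule(base_lr, sched_epochs=None, sched_factors=None, num_epochs=1):
    """Return the learning rate used in each epoch 0..num_epochs-1."""
    sched_epochs = sched_epochs or []
    sched_factors = sched_factors or []
    lrs = []
    lr = base_lr
    for epoch in range(num_epochs):
        if epoch in sched_epochs:
            # Factors scale the *base* learning rate, not the current one.
            lr = base_lr * sched_factors[sched_epochs.index(epoch)]
        lrs.append(lr)
    return lrs

# learning_rate=0.3, lr_sched_epochs=[50, 100], lr_sched_factors=[0.1, 0.01]:
# epochs 0-49 use 0.3, epochs 50-99 use 0.03, epochs 100+ use 0.003.
lrs = lr_schedule(0.3, [50, 100], [0.1, 0.01], num_epochs=150)
```

Inside StandardRunner.run, the same arguments drive the learning rate actually applied during training; this sketch is only meant to make the epoch/factor pairing concrete.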