MNIST VAE

class deepobs.pytorch.testproblems.mnist_vae.mnist_vae(batch_size, l2_reg=None)[source]

DeepOBS test problem class for a variational autoencoder (VAE) on MNIST.

The network has been adapted from here and consists of an encoder:

  • With three convolutional layers, each with 64 filters.
  • Using a leaky ReLU activation function with \(\alpha = 0.3\).
  • Dropout layers after each convolutional layer with a rate of 0.2.

and a decoder:

  • With two dense layers with 24 and 49 units and leaky ReLU activation.
  • With three deconvolutional layers, each with 64 filters.
  • Dropout layers after the first two deconvolutional layers with a rate of 0.2.
  • A final dense layer with 28 x 28 units and sigmoid activation.

No regularization is used. A minimal sketch of this architecture is given below.
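Since the exact layer hyperparameters are not listed above, the following is a minimal PyTorch sketch of such an encoder/decoder pair. The kernel sizes, strides, padding and the latent dimension of 8 are assumptions for illustration only; the actual net_vae implementation may differ in these details:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SketchVAE(nn.Module):
        def __init__(self, latent_dim=8):  # latent_dim is an assumption
            super().__init__()
            # Encoder: three conv layers with 64 filters each (28 -> 14 -> 7 -> 3).
            self.enc_convs = nn.ModuleList(
                [nn.Conv2d(c_in, 64, kernel_size=4, stride=2, padding=1)
                 for c_in in (1, 64, 64)]
            )
            self.fc_mean = nn.Linear(64 * 3 * 3, latent_dim)
            self.fc_logvar = nn.Linear(64 * 3 * 3, latent_dim)
            # Decoder: dense layers with 24 and 49 (= 7 x 7) units, three
            # deconv layers with 64 filters each, and a final dense layer
            # with 28 x 28 units.
            self.dec_fc1 = nn.Linear(latent_dim, 24)
            self.dec_fc2 = nn.Linear(24, 49)
            self.dec_deconvs = nn.ModuleList([
                nn.ConvTranspose2d(1, 64, kernel_size=4, stride=2, padding=1),   # 7 -> 14
                nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),  # 14 -> 28
                nn.ConvTranspose2d(64, 64, kernel_size=3, stride=1, padding=1),  # 28 -> 28
            ])
            self.dec_out = nn.Linear(64 * 28 * 28, 28 * 28)

        def encode(self, x):
            # Leaky ReLU (alpha = 0.3) and dropout (rate 0.2) after each conv layer.
            for conv in self.enc_convs:
                x = F.dropout(F.leaky_relu(conv(x), 0.3), 0.2, self.training)
            x = torch.flatten(x, 1)
            return self.fc_mean(x), self.fc_logvar(x)

        def decode(self, z):
            h = F.leaky_relu(self.dec_fc1(z), 0.3)
            h = F.leaky_relu(self.dec_fc2(h), 0.3).view(-1, 1, 7, 7)
            for i, deconv in enumerate(self.dec_deconvs):
                h = F.leaky_relu(deconv(h), 0.3)
                if i < 2:  # dropout only after the first two deconv layers
                    h = F.dropout(h, 0.2, self.training)
            # Final dense layer with 28 x 28 units and sigmoid activation.
            h = torch.sigmoid(self.dec_out(torch.flatten(h, 1)))
            return h.view(-1, 1, 28, 28)

        def forward(self, x):
            mean, logvar = self.encode(x)
            # Reparameterization trick: z = mean + sigma * eps.
            z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
            return self.decode(z), mean, logvar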

Parameters:
  • batch_size (int) -- Batch size to use.
  • l2_reg (float) -- No L2-Regularization (weight decay) is used in this test problem. Defaults to None and any input here is ignored.
data

The DeepOBS data set class for MNIST.

loss_function

The loss function for this testproblem (vae_loss_function as defined in testproblem_utils).
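For orientation, a typical VAE loss combines a per-pixel reconstruction term with a KL divergence between the approximate posterior and a standard normal prior. The sketch below shows this common formulation; the exact vae_loss_function may differ, e.g. in how the two terms are scaled or reduced:

    import torch
    import torch.nn.functional as F

    def vae_loss_sketch(reconstruction, target, mean, logvar, reduction="mean"):
        # Per-example reconstruction term: binary cross-entropy between the
        # reconstructed and the original image, summed over pixels.
        bce = F.binary_cross_entropy(
            reconstruction.flatten(1), target.flatten(1), reduction="none"
        ).sum(dim=1)
        # KL divergence between N(mean, exp(logvar)) and the standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp(), dim=1)
        loss = bce + kl
        if reduction == "mean":
            return loss.mean()
        if reduction == "sum":
            return loss.sum()
        return loss  # 'none': one loss value per example in the mini-batch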

net

The DeepOBS subclass of torch.nn.Module that is trained for this testproblem (net_vae).

get_batch_loss_and_accuracy_func(reduction='mean', add_regularization_if_available=True)[source]

Gets a new batch and creates a forward function that calculates the loss and accuracy (if available) on that batch.

Parameters:
  • reduction (str) -- The reduction that is used for returning the loss. Can be 'mean', 'sum' or 'none', in which case each individual loss in the mini-batch is returned as a tensor.
  • add_regularization_if_available (bool) -- If True, regularization is added to the loss.
Returns:

The function that calculates the loss/accuracy on the current batch.

Return type:

callable
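As a rough usage sketch, the returned closure could drive a single training step as follows. The train_init_op call and the assumption that the closure returns a (loss, accuracy) pair follow the usual DeepOBS PyTorch testproblem pattern and may differ between versions:

    import torch
    from deepobs.pytorch.testproblems.mnist_vae import mnist_vae

    tproblem = mnist_vae(batch_size=64)
    tproblem.set_up()
    tproblem.train_init_op()  # switch to (and reset) the training data set

    opt = torch.optim.SGD(tproblem.net.parameters(), lr=0.01)

    forward_func = tproblem.get_batch_loss_and_accuracy_func()
    loss, accuracy = forward_func()  # accuracy stays 0: the VAE has no labels
    opt.zero_grad()
    loss.backward()
    opt.step()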

set_up()[source]

Sets up the VAE test problem on MNIST.