Fashion-MNIST VAE

class deepobs.pytorch.testproblems.fmnist_vae.fmnist_vae(batch_size, weight_decay=None)[source]

DeepOBS test problem class for a variational autoencoder (VAE) on Fashion-MNIST.

The network has been adapted from an existing implementation and consists of an encoder:

  • With three convolutional layers, each with 64 filters.
  • Using a leaky ReLU activation function with \(\alpha = 0.3\).
  • Dropout layers after each convolutional layer with a rate of 0.2.

and a decoder:

  • With two dense layers with 24 and 49 units and leaky ReLU activation.
  • With three deconvolutional layers, each with 64 filters.
  • Dropout layers after the first two deconvolutional layers with a rate of 0.2.
  • A final dense layer with 28 x 28 units and sigmoid activation.

No regularization is used.
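The encoder/decoder described above can be sketched in PyTorch as follows. This is a minimal illustration, not the exact DeepOBS implementation: the kernel sizes, strides, paddings, and the latent dimension (here 8) are assumptions chosen so that the shapes work out; only the layer counts, filter counts, activations, and dropout rates follow the description above.

```python
import torch
import torch.nn as nn

class SketchVAE(nn.Module):
    """Hypothetical sketch of the described VAE; hyperparameters are assumed."""

    def __init__(self, latent_dim=8):
        super().__init__()
        # Encoder: three conv layers (64 filters each), leaky ReLU with
        # alpha = 0.3, and dropout (rate 0.2) after each conv layer.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),   # 28x28 -> 14x14
            nn.LeakyReLU(0.3), nn.Dropout(0.2),
            nn.Conv2d(64, 64, 4, stride=2, padding=1),  # 14x14 -> 7x7
            nn.LeakyReLU(0.3), nn.Dropout(0.2),
            nn.Conv2d(64, 64, 4, stride=1, padding=1),  # 7x7 -> 6x6
            nn.LeakyReLU(0.3), nn.Dropout(0.2),
            nn.Flatten(),
        )
        self.fc_mean = nn.Linear(64 * 6 * 6, latent_dim)
        self.fc_logvar = nn.Linear(64 * 6 * 6, latent_dim)
        # Decoder: dense layers with 24 and 49 units, three deconv layers
        # (64 filters each, dropout after the first two), and a final dense
        # layer with 28 x 28 units and sigmoid activation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 24), nn.LeakyReLU(0.3),
            nn.Linear(24, 49), nn.LeakyReLU(0.3),
            nn.Unflatten(1, (1, 7, 7)),
            nn.ConvTranspose2d(1, 64, 4, stride=2, padding=1),   # 7x7 -> 14x14
            nn.LeakyReLU(0.3), nn.Dropout(0.2),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1),  # 14x14 -> 28x28
            nn.LeakyReLU(0.3), nn.Dropout(0.2),
            nn.ConvTranspose2d(64, 64, 3, stride=1, padding=1),  # 28x28 -> 28x28
            nn.LeakyReLU(0.3),
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mean, logvar = self.fc_mean(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mean + std * torch.randn_like(std)  # reparameterization trick
        return self.decoder(z).view(-1, 1, 28, 28), mean, logvar
```

The reparameterization trick in `forward` is what makes sampling from the latent distribution differentiable, so the encoder can be trained by backpropagation.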

Parameters:
  • batch_size (int) -- Batch size to use.
  • weight_decay (float) -- No weight decay (L2-regularization) is used in this test problem. Defaults to None and any input here is ignored.
data

The DeepOBS data set class for Fashion-MNIST.

loss_function

The loss function for this testproblem (vae_loss_function, as defined in testproblem_utils).
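A standard VAE loss combines a reconstruction term with the analytic KL divergence between the latent distribution \(\mathcal{N}(\mu, \sigma^2)\) and the standard normal prior. The pure-Python sketch below illustrates this with a binary cross-entropy reconstruction term over flattened pixel values; the exact form and reduction used by DeepOBS' vae_loss_function may differ.

```python
import math

def vae_loss(x, x_hat, mean, logvar):
    """Illustrative VAE loss: BCE reconstruction + KL(N(mean, exp(logvar)) || N(0, 1)).

    x, x_hat: flattened target and reconstructed pixel values in [0, 1].
    mean, logvar: latent mean and log-variance vectors.
    """
    eps = 1e-12  # guards against log(0)
    # Binary cross-entropy between target pixels and reconstructions.
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for t, p in zip(x, x_hat))
    # Closed-form KL divergence to the standard normal prior.
    kl = -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                    for m, lv in zip(mean, logvar))
    return bce + kl
```

With a perfect reconstruction and a latent distribution equal to the prior (mean 0, log-variance 0), both terms vanish and the loss is zero.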

net

The DeepOBS subclass of torch.nn.Module that is trained for this testproblem (net_vae).

get_batch_loss_and_accuracy_func(reduction='mean', add_regularization_if_available=True)[source]

Gets a new batch and creates a function that calculates the loss and accuracy (if available) on that batch. This is a default implementation for image classification. Testproblems with different calculation routines (e.g. RNNs) overwrite this method accordingly.

Parameters:
  • reduction (str) -- Reduction applied to the per-example losses (e.g. 'mean' or 'sum'). Defaults to 'mean'.
  • add_regularization_if_available (bool) -- If True, regularization is added to the loss. Defaults to True.
Returns:A function that calculates the loss and accuracy of the model on the current batch.
Return type:callable
set_up()[source]

Sets up the test problem.