# MNIST MLP

class deepobs.tensorflow.testproblems.mnist_mlp.mnist_mlp(batch_size, weight_decay=None)

DeepOBS test problem class for a multi-layer perceptron neural network on MNIST.

The network is built as follows:

• Four fully-connected layers with 1000, 500, 100 and 10 units per layer.
• The first three layers use ReLU activation, and the last one a softmax activation.
• The biases are initialized to 0.0 and the weight matrices from a truncated normal distribution with a standard deviation of 3e-2.
• The model uses a cross entropy loss.
• No regularization is used.
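The architecture described above can be sketched as a plain NumPy forward pass. This is an illustrative sketch, not DeepOBS code: the `truncated_normal` helper and the layer sizes follow the bullet points, and the 784-dimensional input is the standard flattened 28x28 MNIST image.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(shape, std=3e-2):
    """Sample N(0, std^2), redrawing values beyond 2 std (truncated normal)."""
    x = rng.normal(0.0, std, size=shape)
    mask = np.abs(x) > 2 * std
    while mask.any():
        x[mask] = rng.normal(0.0, std, size=int(mask.sum()))
        mask = np.abs(x) > 2 * std
    return x

# Layer sizes: 784 input pixels -> 1000 -> 500 -> 100 -> 10 classes.
sizes = [784, 1000, 500, 100, 10]
weights = [truncated_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]  # biases initialized to 0.0

def forward(x):
    """ReLU on the first three layers, softmax on the last."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Per-example cross-entropy loss, shape (batch_size,)."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

batch = rng.random((128, 784))          # stand-in for a batch of MNIST images
labels = rng.integers(0, 10, size=128)  # stand-in integer labels
losses = cross_entropy(forward(batch), labels)
print(losses.shape)  # (128,)
```

Note that the per-example losses here mirror the shape of the `losses` attribute below; the mean over the batch would give the scalar training loss.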
Parameters:

• batch_size (int) -- Batch size to use.
• weight_decay (float) -- No weight decay (L2-regularization) is used in this test problem. Defaults to None and any input here is ignored.
dataset

The DeepOBS data set class for MNIST.

train_init_op

A TensorFlow operation that initializes the test problem for the training phase.

train_eval_init_op

A TensorFlow operation that initializes the test problem for evaluation on training data.

test_init_op

A TensorFlow operation that initializes the test problem for evaluation on test data.

losses

A tf.Tensor of shape (batch_size,) containing the per-example loss values.

regularizer

A scalar tf.Tensor containing a regularization term. Will always be 0.0 since no regularizer is used.

accuracy

A scalar tf.Tensor containing the mini-batch mean accuracy.
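As an illustration of the semantics of `accuracy`, here is a NumPy sketch under the assumption that accuracy means argmax-prediction-vs-label agreement averaged over the mini-batch (the usual definition for classification test problems); the probabilities and labels below are hypothetical.

```python
import numpy as np

def minibatch_accuracy(probs, labels):
    """Fraction of argmax predictions matching the integer class labels
    (a scalar, analogous to the `accuracy` tensor)."""
    return float(np.mean(np.argmax(probs, axis=1) == labels))

# Hypothetical softmax outputs for a batch of 4 examples, 10 classes each.
probs = np.zeros((4, 10))
probs[[0, 1, 2, 3], [3, 3, 7, 1]] = 1.0   # predicted classes: 3, 3, 7, 1
labels = np.array([3, 5, 7, 1])           # true classes
print(minibatch_accuracy(probs, labels))  # 0.75
```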

set_up()

Set up the multi-layer perceptron test problem instance on MNIST.