Execution Time: 6.20s

Test: tutorial-legacy-mlp-mlpHiggs (Passed)
Build: master-x86_64-centos7-gcc8-opt-no-rt-cxxmodules (olsnba08.cern.ch) on 2020-01-25 13:29:58

Test Timing: Passed
Processors: 1


Test output
Processing /data/sftnight/workspace/root-benchmark-no-rt-cxxmodules/BUILDTYPE/Release/COMPILER/gcc830/LABEL/performance-sandy-cc7/root/tutorials/legacy/mlp/mlpHiggs.C...
accessing mlpHiggs.root file from http://root.cern.ch/files
Info in <TMultiLayerPerceptron::Train>: Using 979 train and 979 test entries.
Training the Neural Network
Epoch: 0 learn=0.128367 test=0.127564
Epoch: 10 learn=0.0990026 test=0.0940797
Epoch: 20 learn=0.0913737 test=0.0887237
Epoch: 30 learn=0.0907341 test=0.0884268
Epoch: 40 learn=0.0902907 test=0.0877603
Epoch: 50 learn=0.0901993 test=0.0875579
Epoch: 60 learn=0.0900819 test=0.0879254
Epoch: 70 learn=0.0896193 test=0.0873073
Epoch: 80 learn=0.089285 test=0.0874124
Epoch: 90 learn=0.0887983 test=0.086714
Epoch: 99 learn=0.0885716 test=0.0868343
Training done.
test.py created.
Network with structure: @msumf,@ptsumf,@acolin:5:3:type
inputs with low values in the differences plot may not be needed
@msumf -> 0.0170839 +/- 0.0164142
@ptsumf -> 0.035914 +/- 0.0355045
@acolin -> 0.0332944 +/- 0.0311229
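
For reference, the log above comes from running the stock mlpHiggs tutorial macro in batch mode (roughly `root -l -b -q mlpHiggs.C`). Below is a minimal sketch of the steps that produce this kind of output, not the tutorial's exact code: the tree names sig_filtered/bg_filtered, the merged-tree name, and the exact Train() option string are assumptions, while the network layout, the 100 epochs, the Python export, and the TMLPAnalyzer summary are taken directly from the log.

// mlp_sketch.C -- hedged sketch of the mlpHiggs-style training, not the actual tutorial source
#include "TFile.h"
#include "TTree.h"
#include "TMultiLayerPerceptron.h"
#include "TMLPAnalyzer.h"

void mlp_sketch()
{
   // The benchmark fetches this file from http://root.cern.ch/files if it is not present.
   TFile *input = TFile::Open("mlpHiggs.root");
   TTree *sig = (TTree *) input->Get("sig_filtered");   // assumed tree name
   TTree *bg  = (TTree *) input->Get("bg_filtered");    // assumed tree name

   // Merge signal and background into one tree with a 'type' label
   // (1 = signal, 0 = background) so the network can be trained against it.
   Float_t msumf, ptsumf, acolin;
   Int_t type;
   TTree *simu = new TTree("MonteCarlo", "Filtered Monte Carlo events");
   simu->Branch("msumf",  &msumf,  "msumf/F");
   simu->Branch("ptsumf", &ptsumf, "ptsumf/F");
   simu->Branch("acolin", &acolin, "acolin/F");
   simu->Branch("type",   &type,   "type/I");
   for (TTree *t : {sig, bg}) {
      t->SetBranchAddress("msumf",  &msumf);
      t->SetBranchAddress("ptsumf", &ptsumf);
      t->SetBranchAddress("acolin", &acolin);
      type = (t == sig) ? 1 : 0;
      for (Long64_t i = 0; i < t->GetEntries(); ++i) {
         t->GetEntry(i);
         simu->Fill();
      }
   }

   // Layout string as reported above: three normalised inputs ('@' asks for
   // normalisation), hidden layers of 5 and 3 neurons, one output trained
   // against 'type'. Even entries train, odd entries test, which matches the
   // "979 train and 979 test entries" line in the log.
   TMultiLayerPerceptron *mlp = new TMultiLayerPerceptron(
      "@msumf,@ptsumf,@acolin:5:3:type", simu, "Entry$%2", "(Entry$+1)%2");
   mlp->Train(100, "text, update=10");   // 100 epochs, progress printed every 10

   // Export the trained network as a standalone Python class -> "test.py created."
   mlp->Export("test", "python");

   // Input-relevance analysis; prints the structure line and the
   // "differences" summary for @msumf, @ptsumf and @acolin seen above.
   TMLPAnalyzer ana(mlp);
   ana.GatherInformations();
   ana.CheckNetwork();
}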