Execution Time: 0.17s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-fedora29-gcc8-dbg (root-fedora29-3.cern.ch) on 2019-11-14 10:13:39

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
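The network under test is a 4→2 dense layer with identity activation, a batch-norm layer over its 2 output features, and a 2→1 dense output layer, evaluated on batches of 10 samples (the loss code "R" appears to be TMVA's tag for mean squared error). As a reading aid for the matrices below, here is a minimal NumPy sketch of that forward pass; the function and weight names, and the epsilon value, are placeholders rather than TMVA identifiers:

```python
import numpy as np

def forward(x, W0, gamma, beta, W2, eps=1e-4):
    """Shape-for-shape sketch of the logged architecture:
    Dense(4->2, identity) -> BatchNorm(2) -> Dense(2->1, identity).
    x: (10, 4) batch; W0: (2, 4); gamma, beta: (2,); W2: (1, 2).
    eps is a placeholder; the log does not show TMVA's value."""
    h = x @ W0.T                       # "output DL", shape (10, 2)
    mu = h.mean(axis=0)                # per-feature batch mean ("mu var0", ...)
    var = h.var(axis=0)                # biased (population) variance
    bn = gamma * (h - mu) / np.sqrt(var + eps) + beta   # "output BN"
    return bn @ W2.T                   # network output, shape (10, 1)
```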
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.305913
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.279     -0.2698 
   1 |      1.888       1.048 
   2 |     -1.544      -1.612 
   3 |      1.037    -0.02288 
   4 |      0.224     -0.4158 
   5 |      1.042     -0.4263 
   6 |     0.1752      0.8035 
   7 |    -0.3513     0.04678 
   8 |     0.1471     0.00911 
   9 |     0.1624     0.08924 

output BN 
output DL feature 0 mean 0.305913	output DL std 0.913211
output DL feature 1 mean -0.0750379	output DL std 0.724452
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |   -0.03108     -0.2834 
   1 |      1.826       1.633 
   2 |     -2.136      -2.236 
   3 |     0.8444     0.07588 
   4 |   -0.09456     -0.4958 
   5 |     0.8496      -0.511 
   6 |    -0.1509       1.278 
   7 |    -0.7586      0.1772 
   8 |    -0.1833      0.1224 
   9 |    -0.1657       0.239 

output BN feature 0 mean -8.32667e-17	output BN std 1.05402
output BN feature 1 mean 8.32667e-18	output BN std 1.05398
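Two details of the batch-norm output are worth noting. The per-feature means are zero only up to floating-point rounding (≈1e-17), and the printed stds are ≈1.054 rather than exactly 1: that is what happens when the layer normalizes by the biased (population) std while the check prints the unbiased (sample) std, since for a batch of n = 10 the ratio is sqrt(n/(n-1)) = sqrt(10/9) ≈ 1.05409 (the small remaining deficit, 1.05402 vs 1.05409, is consistent with an epsilon term in the normalizing denominator). A quick check of that interpretation, which is an inference and not taken from the test source:

```python
import numpy as np

x = np.random.randn(10)                 # any batch of 10 values
x_hat = (x - x.mean()) / x.std()        # batch-norm style: biased std (ddof=0)
print(x_hat.std(ddof=1))                # unbiased std of the normalized batch:
                                        # sqrt(10/9) ~ 1.05409, for any input
```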
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |      1.033      -1.399      0.9911      -1.834 
   1 |    -0.9382     -0.5476       10.06      -6.397 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |      1.115     -0.0244      -1.079     0.06312 
   1 |     0.4455      0.1716     -0.8023      0.4353 

 training batch 2 mu var0 = 0.305916
compute loss for weight  1.11504  1.11503 result 3.81369
 training batch 3 mu var0 = 0.305913
compute loss for weight  1.11502  1.11503 result 3.81367
 training batch 4 mu var0 = 0.305914
compute loss for weight  1.11504  1.11503 result 3.81369
 training batch 5 mu var0 = 0.305913
compute loss for weight  1.11503  1.11503 result 3.81368
   --dy = 1.03333 dy_ref = 1.03333
 training batch 6 mu var0 = 0.305912
compute loss for weight  -0.0243855  -0.0243955 result 3.81367
 training batch 7 mu var0 = 0.305913
compute loss for weight  -0.0244055  -0.0243955 result 3.8137
 training batch 8 mu var0 = 0.305913
compute loss for weight  -0.0243905  -0.0243955 result 3.81367
 training batch 9 mu var0 = 0.305913
compute loss for weight  -0.0244005  -0.0243955 result 3.81369
   --dy = -1.39945 dy_ref = -1.39945
 training batch 10 mu var0 = 0.305913
compute loss for weight  -1.079  -1.07901 result 3.81369
 training batch 11 mu var0 = 0.305913
compute loss for weight  -1.07902  -1.07901 result 3.81367
 training batch 12 mu var0 = 0.305913
compute loss for weight  -1.079  -1.07901 result 3.81369
 training batch 13 mu var0 = 0.305913
compute loss for weight  -1.07901  -1.07901 result 3.81368
   --dy = 0.991074 dy_ref = 0.991074
 training batch 14 mu var0 = 0.305913
compute loss for weight  0.0631273  0.0631173 result 3.81366
 training batch 15 mu var0 = 0.305913
compute loss for weight  0.0631073  0.0631173 result 3.8137
 training batch 16 mu var0 = 0.305913
compute loss for weight  0.0631223  0.0631173 result 3.81367
 training batch 17 mu var0 = 0.305913
compute loss for weight  0.0631123  0.0631173 result 3.81369
   --dy = -1.83405 dy_ref = -1.83405
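Each check above nudges a single layer-0 weight, re-evaluates the loss over the same batch (the four "compute loss" lines per weight show the perturbed value next to the original, at offsets of roughly ±1e-5 and ±5e-6), and compares the resulting finite-difference slope dy to the backpropagated gradient dy_ref. The pattern of four evaluations is consistent with a five-point central-difference stencil; the following sketch shows that estimator, with the caveat that the exact stencil TMVA uses is inferred from the logged perturbations, not confirmed from the source:

```python
def numerical_derivative(loss_fn, w, h=1e-5):
    """O(h^4) five-point central-difference estimate of d(loss)/dw.
    The steps +/-h and +/-h/2 mirror the perturbed weights in the log
    (e.g. 1.11504 and 1.11502 around the original 1.11503)."""
    return (8.0 * (loss_fn(w + h / 2) - loss_fn(w - h / 2))
            - (loss_fn(w + h) - loss_fn(w - h))) / (6.0 * h)
```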
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      9.022      -1.395 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.305913
compute loss for weight  1.00001  1 result 3.81377
 training batch 19 mu var0 = 0.305913
compute loss for weight  0.99999  1 result 3.81359
 training batch 20 mu var0 = 0.305913
compute loss for weight  1.00001  1 result 3.81373
 training batch 21 mu var0 = 0.305913
compute loss for weight  0.999995  1 result 3.81364
   --dy = 9.02213 dy_ref = 9.02213
 training batch 22 mu var0 = 0.305913
compute loss for weight  1.00001  1 result 3.81367
 training batch 23 mu var0 = 0.305913
compute loss for weight  0.99999  1 result 3.8137
 training batch 24 mu var0 = 0.305913
compute loss for weight  1.00001  1 result 3.81367
 training batch 25 mu var0 = 0.305913
compute loss for weight  0.999995  1 result 3.81369
   --dy = -1.39477 dy_ref = -1.39477
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -7.216e-16   1.665e-16 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.305913
compute loss for weight  1e-05  0 result 3.81368
 training batch 27 mu var0 = 0.305913
compute loss for weight  -1e-05  0 result 3.81368
 training batch 28 mu var0 = 0.305913
compute loss for weight  5e-06  0 result 3.81368
 training batch 29 mu var0 = 0.305913
compute loss for weight  -5e-06  0 result 3.81368
   --dy = 1.9984e-10 dy_ref = -7.21645e-16
 training batch 30 mu var0 = 0.305913
compute loss for weight  1e-05  0 result 3.81368
 training batch 31 mu var0 = 0.305913
compute loss for weight  -1e-05  0 result 3.81368
 training batch 32 mu var0 = 0.305913
compute loss for weight  5e-06  0 result 3.81368
 training batch 33 mu var0 = 0.305913
compute loss for weight  -5e-06  0 result 3.81368
   --dy = -2.22045e-10 dy_ref = 1.66533e-16
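This second parameter set of the batch-norm layer is initialized to zero, so it is presumably the additive shift (beta). Here both sides of the comparison vanish: the analytic gradients are O(1e-16) and the finite-difference estimates are O(1e-10), the latter being the roundoff floor of subtracting two losses of ≈3.81368 that agree to machine precision and dividing by a ~1e-5 step. The apparent sign and magnitude mismatch between dy and dy_ref in these two checks is therefore numerical noise around zero, not a backpropagation error.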
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      -3.77      -1.936 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -2.393      0.7205 

 training batch 34 mu var0 = 0.305913
compute loss for weight  -2.39336  -2.39337 result 3.81364
 training batch 35 mu var0 = 0.305913
compute loss for weight  -2.39338  -2.39337 result 3.81372
 training batch 36 mu var0 = 0.305913
compute loss for weight  -2.39337  -2.39337 result 3.81366
 training batch 37 mu var0 = 0.305913
compute loss for weight  -2.39338  -2.39337 result 3.8137
   --dy = -3.76963 dy_ref = -3.76963
 training batch 38 mu var0 = 0.305913
compute loss for weight  0.720518  0.720508 result 3.81366
 training batch 39 mu var0 = 0.305913
compute loss for weight  0.720498  0.720508 result 3.8137
 training batch 40 mu var0 = 0.305913
compute loss for weight  0.720513  0.720508 result 3.81367
 training batch 41 mu var0 = 0.305913
compute loss for weight  0.720503  0.720508 result 3.81369
   --dy = -1.93582 dy_ref = -1.93582
Testing weight gradients: maximum relative error: 2.22045e-10
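The summary value is the worst case over all checks, and it equals the raw |dy| of the near-zero shift-parameter check above. That suggests the error measure falls back to an absolute difference when the reference gradient is essentially zero (a plain relative error against a ~1e-16 reference would have been of order 1). A sketch of such a measure, with the fallback and its threshold being assumptions rather than TMVA's confirmed behavior:

```python
def max_error(pairs, tiny=1e-12):
    """Max gradient-check error over (dy, dy_ref) pairs: relative error in
    general, absolute difference when the reference is essentially zero.
    The fallback and threshold are assumptions; the logged summary
    (2.22045e-10) matches this behavior for the near-zero checks."""
    worst = 0.0
    for dy, ref in pairs:
        diff = abs(dy - ref)
        worst = max(worst, diff if abs(ref) < tiny else diff / max(abs(dy), abs(ref)))
    return worst
```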