Execution Time: 0.11s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-fedora29-gcc8-dbg (root-fedora29-2.cern.ch) on 2019-11-13 14:44:49

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
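
The topology is small enough to restate compactly. The following is a minimal, self-contained sketch of the two dense layers (plain C++; the function name denseForward and the bias-free form are illustrative assumptions based on the parameter matrices printed below, not the actual TMVA::DNN classes). The batch-norm layer is sketched separately after its output further down.

#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Dense layer with identity activation and no bias term: Y = X * W^T.
// For layer 0, X is the 10x4 input batch and W the 2x4 weight matrix,
// giving the 10x2 output printed below; layer 2 maps 10x2 to 10x1.
Matrix denseForward(const Matrix& X, const Matrix& W) {
    Matrix Y(X.size(), std::vector<double>(W.size(), 0.0));
    for (size_t i = 0; i < X.size(); ++i)        // batch events
        for (size_t j = 0; j < W.size(); ++j)    // output features
            for (size_t k = 0; k < X[i].size(); ++k)
                Y[i][j] += X[i][k] * W[j][k];
    return Y;
}
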
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

training batch 1: mu var0 = 0.0612212
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.2571      0.3349 
   1 |    -0.5192      0.3633 
   2 |      1.097      0.6363 
   3 |    -0.8568      0.1362 
   4 |     0.1136      0.3026 
   5 |     0.2424      0.7642 
   6 |     -1.377      -1.779 
   7 |      1.475       1.337 
   8 |     0.4309    0.007932 
   9 |     -0.251     -0.1135 

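As a cross-check, entry (0, 0) above can be reproduced from the first input row and the layer-0 weights printed further down: 0.9989 * 0.03772 + (-0.4348) * 0.6337 + 0.7818 * 0.6133 + (-0.03005) * (-0.5115) = 0.2570, matching 0.2571 to the displayed precision and consistent with Y = X * W^T with no bias term.
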
output DL feature 0: mean = 0.0612212, std = 0.858191
output DL feature 1: mean = 0.198997, std = 0.810517
output of BN

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.2405      0.1767 
   1 |    -0.7128      0.2137 
   2 |      1.272      0.5686 
   3 |     -1.128     -0.0816 
   4 |    0.06434      0.1347 
   5 |     0.2225      0.7349 
   6 |     -1.766      -2.572 
   7 |      1.736        1.48 
   8 |      0.454     -0.2485 
   9 |    -0.3835     -0.4063 

output BN feature 0: mean = 1.66533e-17, std = 1.05401
output BN feature 1: mean = 1.66533e-17, std = 1.054
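
The normalized outputs have mean 0 as expected, but std 1.054 rather than exactly 1: batch normalization divides by the biased (1/N) standard deviation, while the printed statistic is evidently the unbiased (1/(N-1)) estimator, and sqrt(10/9) = 1.05409 for batch size 10. A minimal sketch reproducing the feature-0 numbers (illustrative code; the epsilon value is an assumption, though the printed 1.05401 is consistent with an epsilon of about 1e-4 inside the square root):

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Feature 0 of the dense-layer output quoted above.
    std::vector<double> x = {0.2571, -0.5192, 1.097, -0.8568, 0.1136,
                             0.2424, -1.377,  1.475, 0.4309, -0.251};
    const double n = x.size();

    double mu = 0.0;
    for (double v : x) mu += v;
    mu /= n;                                       // batch mean, ~0.0612

    double ss = 0.0;
    for (double v : x) ss += (v - mu) * (v - mu);
    double varBiased = ss / n;                     // 1/N variance used by BN
    std::printf("DL mean %g  DL std %g\n", mu, std::sqrt(ss / (n - 1)));

    const double eps = 1e-4;                       // assumed epsilon
    std::vector<double> xhat(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        xhat[i] = (x[i] - mu) / std::sqrt(varBiased + eps);

    double m = 0.0, s2 = 0.0;
    for (double v : xhat) m += v;
    m /= n;
    for (double v : xhat) s2 += (v - m) * (v - m);
    std::printf("BN mean %g  BN std %g\n", m, std::sqrt(s2 / (n - 1)));
}
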
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |   -0.03507     0.03797     0.07807      0.1379 
   1 |   -0.08021     -0.2881     -0.2551      0.3178 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    0.03772      0.6337      0.6133     -0.5115 
   1 |     0.4971      0.5678     0.07937     -0.7666 

training batch 2: mu var0 = 0.061224
compute loss for weight 0.0377308 (nominal 0.0377208): result = 0.166758
training batch 3: mu var0 = 0.0612212
compute loss for weight 0.0377108 (nominal 0.0377208): result = 0.166759
training batch 4: mu var0 = 0.0612219
compute loss for weight 0.0377258 (nominal 0.0377208): result = 0.166758
training batch 5: mu var0 = 0.0612212
compute loss for weight 0.0377158 (nominal 0.0377208): result = 0.166759
   --dy = -0.0350737 dy_ref = -0.0350737
training batch 6: mu var0 = 0.0612207
compute loss for weight 0.633662 (nominal 0.633652): result = 0.166759
training batch 7: mu var0 = 0.0612212
compute loss for weight 0.633642 (nominal 0.633652): result = 0.166758
training batch 8: mu var0 = 0.061221
compute loss for weight 0.633657 (nominal 0.633652): result = 0.166759
training batch 9: mu var0 = 0.0612212
compute loss for weight 0.633647 (nominal 0.633652): result = 0.166758
   --dy = 0.0379722 dy_ref = 0.0379722
training batch 10: mu var0 = 0.0612215
compute loss for weight 0.613356 (nominal 0.613346): result = 0.166759
training batch 11: mu var0 = 0.0612212
compute loss for weight 0.613336 (nominal 0.613346): result = 0.166758
training batch 12: mu var0 = 0.0612213
compute loss for weight 0.613351 (nominal 0.613346): result = 0.166759
training batch 13: mu var0 = 0.0612212
compute loss for weight 0.613341 (nominal 0.613346): result = 0.166758
   --dy = 0.0780728 dy_ref = 0.0780728
training batch 14: mu var0 = 0.0612211
compute loss for weight -0.511525 (nominal -0.511535): result = 0.16676
training batch 15: mu var0 = 0.0612212
compute loss for weight -0.511545 (nominal -0.511535): result = 0.166757
training batch 16: mu var0 = 0.0612212
compute loss for weight -0.51153 (nominal -0.511535): result = 0.166759
training batch 17: mu var0 = 0.0612212
compute loss for weight -0.51154 (nominal -0.511535): result = 0.166758
   --dy = 0.137932 dy_ref = 0.137932
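
Each block above evaluates the loss at four perturbations of one weight (plus or minus 1e-5 and 5e-6 around the nominal value) and compares the resulting finite-difference slope dy to the backpropagated gradient dy_ref. The four points are consistent with a fourth-order central-difference stencil of step h = 5e-6; a minimal sketch of that pattern (numericalGradient and lossAt are hypothetical names, not the actual test code):

#include <functional>

// Fourth-order central difference: samples the loss at w +- h and w +- 2h,
// which for h = 5e-6 gives exactly the four perturbations seen in the log.
double numericalGradient(const std::function<double(double)>& lossAt,
                         double w, double h = 5e-6) {
    return (lossAt(w - 2 * h) - 8 * lossAt(w - h)
            + 8 * lossAt(w + h) - lossAt(w + 2 * h)) / (12 * h);
}

// Usage sketch: lossAt(v) would set the weight to v, rerun the forward pass
// on the same batch, and return the loss, as the "compute loss for weight"
// lines above do; the result is then compared against dy_ref.
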
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.4414     -0.1079 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

training batch 18: mu var0 = 0.0612212
compute loss for weight 1.00001 (nominal 1): result = 0.166763
training batch 19: mu var0 = 0.0612212
compute loss for weight 0.99999 (nominal 1): result = 0.166754
training batch 20: mu var0 = 0.0612212
compute loss for weight 1.00001 (nominal 1): result = 0.166761
training batch 21: mu var0 = 0.0612212
compute loss for weight 0.999995 (nominal 1): result = 0.166756
   --dy = 0.441402 dy_ref = 0.441402
training batch 22: mu var0 = 0.0612212
compute loss for weight 1.00001 (nominal 1): result = 0.166757
training batch 23: mu var0 = 0.0612212
compute loss for weight 0.99999 (nominal 1): result = 0.16676
training batch 24: mu var0 = 0.0612212
compute loss for weight 1.00001 (nominal 1): result = 0.166758
training batch 25: mu var0 = 0.0612212
compute loss for weight 0.999995 (nominal 1): result = 0.166759
   --dy = -0.107885 dy_ref = -0.107885
Testing weight gradients for layer 1 (second parameter set)
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  1.388e-17  -1.128e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

training batch 26: mu var0 = 0.0612212
compute loss for weight 1e-05 (nominal 0): result = 0.166758
training batch 27: mu var0 = 0.0612212
compute loss for weight -1e-05 (nominal 0): result = 0.166758
training batch 28: mu var0 = 0.0612212
compute loss for weight 5e-06 (nominal 0): result = 0.166758
training batch 29: mu var0 = 0.0612212
compute loss for weight -5e-06 (nominal 0): result = 0.166758
   --dy = 4.62593e-13 dy_ref = 1.38778e-17
training batch 30: mu var0 = 0.0612212
compute loss for weight 1e-05 (nominal 0): result = 0.166758
training batch 31: mu var0 = 0.0612212
compute loss for weight -1e-05 (nominal 0): result = 0.166758
training batch 32: mu var0 = 0.0612212
compute loss for weight 5e-06 (nominal 0): result = 0.166758
training batch 33: mu var0 = 0.0612212
compute loss for weight -5e-06 (nominal 0): result = 0.166758
   --dy = -3.70074e-12 dy_ref = -1.12757e-17
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.7622     -0.4379 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.5791      0.2464 

training batch 34: mu var0 = 0.0612212
compute loss for weight -0.579107 (nominal -0.579117): result = 0.166751
training batch 35: mu var0 = 0.0612212
compute loss for weight -0.579127 (nominal -0.579117): result = 0.166766
training batch 36: mu var0 = 0.0612212
compute loss for weight -0.579112 (nominal -0.579117): result = 0.166755
training batch 37: mu var0 = 0.0612212
compute loss for weight -0.579122 (nominal -0.579117): result = 0.166762
   --dy = -0.762197 dy_ref = -0.762197
training batch 38: mu var0 = 0.0612212
compute loss for weight 0.246371 (nominal 0.246361): result = 0.166754
training batch 39: mu var0 = 0.0612212
compute loss for weight 0.246351 (nominal 0.246361): result = 0.166763
training batch 40: mu var0 = 0.0612212
compute loss for weight 0.246366 (nominal 0.246361): result = 0.166756
training batch 41: mu var0 = 0.0612212
compute loss for weight 0.246356 (nominal 0.246361): result = 0.166761
   --dy = -0.437913 dy_ref = -0.437913
Testing weight gradients: maximum relative error: 1.94866e-10
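
The summary aggregates the per-weight difference between dy and dy_ref into a maximum relative error. For the zero-initialized shift parameters of the batch-norm layer both values are numerically zero (dy = 4.62593e-13 against dy_ref = 1.38778e-17), so a naive ratio would explode; the small reported maximum suggests such near-zero pairs are floored or excluded. A sketch of one common convention (relativeError and absFloor are hypothetical, not necessarily the test's exact formula):

#include <algorithm>
#include <cmath>

double relativeError(double dy, double dyRef, double absFloor = 1e-8) {
    // Scale by the larger magnitude, floored so near-zero pairs stay finite.
    double scale = std::max({std::abs(dy), std::abs(dyRef), absFloor});
    return std::abs(dy - dyRef) / scale;
}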